Current Guidance on Compiler Warnings?
I'd like to confirm that the guidance on warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current.

Context: at Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product. Many of our customers make certified systems (planes, trains, medical equipment, factory automation, etc.) and the trend in these industries is to be pedantic about eliminating all compiler warnings. While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.

I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost with regard to compiler warnings. While it is simple to say "everything displayed by -Wall", in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance.

Many thanks

Brian Kuhl
On 19/11/2018 19:20, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
More or less - the advice could use updating, and each new compiler release brings new warnings, some busybody, some not, so it's a constant struggle to catch up. Speaking for myself I'm happy to fix or suppress (as appropriate) warnings in my stuff, so bring on the PRs, I would say ;) Of course you may have to nag the community maintenance team if you're submitting PRs against unmaintained libraries. HTH, John.
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems (planes, trains, medical equipment, factory automation, etc.) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost with regard to compiler warnings. While it is simple to say "everything displayed by -Wall", in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance.
Many thanks
Brian Kuhl
On Mon, Nov 19, 2018 at 2:44 PM John Maddock via Boost < boost@lists.boost.org> wrote:
On 19/11/2018 19:20, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
More or less - the advice could use updating, and each new compiler release brings new warnings, some busybody, some not, so it's a constant struggle to catch up.
Speaking for myself I'm happy to fix or suppress (as appropriate) warnings in my stuff, so bring on the PRs, I would say ;)
Of course you may have to nag the community maintenance team if you're submitting PRs against unmaintained libraries.
HTH, John.
Nagging will not be required; I prefer warning-free builds as well. None of the CMT repositories use "warnings-as-errors" as a build setting however, so the CI builds will not test or guarantee no warnings. Would love to get there. A list of CMT repositories can be found here: https://svn.boost.org/trac10/wiki/CommunityMaintenance - Jim
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock via Boost Sent: 19 November 2018 19:44 To: Brian Kuhl via Boost Cc: John Maddock Subject: Re: [boost] Current Guidance on Compiler Warnings?
On 19/11/2018 19:20, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
More or less - the advice could use updating, and each new compiler release brings new warnings, some busybody, some not, so it's a constant struggle to catch up.
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems (planes, trains, medical equipment, factory automation, etc.) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost with regard to compiler warnings. While it is simple to say "everything displayed by -Wall", in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance. Brian Kuhl
Triggered by this thread, I went to consider updating https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines but that is now frozen, so I can't change it. In particular I wanted to add a link to https://blogs.msdn.microsoft.com/vcblog/2017/12/13/broken-warnings-theory/ which discusses the issue, and adds info on some additional finer-grained warning suppression aids added in VS 15.6. I'm willing to get things rolling with the current guidelines, but IMO it should be a freely editable wiki so that anyone can add (or delete). It seems we need a new section somewhere on boost.org, but I'd like views and guidance on where, what and how. Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
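For reference, MSVC's long-standing single-line suppression mechanism looks like this (a minimal sketch; the warning number and the justification comment are illustrative, and the blog post above describes newer, finer-grained options):

unsigned f();

bool g( int x )
{
    // x is validated non-negative by the caller (hypothetical invariant),
    // so the mixed-sign comparison is correct as written;
    // C4018 is suppressed for the next line only
    #pragma warning(suppress : 4018)
    return x < f();
}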
On Mon, Nov 19, 2018 at 11:21 AM Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems (planes, trains, medical equipment, factory automation, etc.) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
In that context there are probably legal reasons for a zero-warning policy, but it is not true that lack of warnings means fewer errors; in fact it could easily lead to more errors. For example, warnings about implicit conversions are frequently "fixed" by replacing the implicit conversion with an explicit conversion (cast). But the two are semantically very different, and therefore one of them is incorrect, and very likely that is the explicit one.
On Monday, 19 November 2018 21:02:55 CET Emil Dotchevski via Boost wrote:
In that context there are probably legal reasons for a zero-warning policy, but it is not true that lack of warnings means fewer errors; in fact it could easily lead to more errors. For example, warnings about implicit conversions are frequently "fixed" by replacing the implicit conversion with an explicit conversion (cast). But the two are semantically very different, and therefore one of them is incorrect, and very likely that is the explicit one.
Didn't know that. Can I see an example where the explicit warning is incorrect as opposed to implicit warning? Thanks -- Cheers Jayesh Badwaik https://jayeshbadwaik.github.io
On Thu, Nov 22, 2018 at 12:56 AM Jayesh Badwaik via Boost < boost@lists.boost.org> wrote:
On Monday, 19 November 2018 21:02:55 CET Emil Dotchevski via Boost wrote:
In that context there are probably legal reasons for a zero-warning policy, but it is not true that lack of warnings means fewer errors; in fact it could easily lead to more errors. For example, warnings about implicit conversions are frequently "fixed" by replacing the implicit conversion with an explicit conversion (cast). But the two are semantically very different, and therefore one of them is incorrect, and very likely that is the explicit one.
Didn't know that. Can I see an example where the explicit warning is incorrect as opposed to implicit warning?
I'm guessing you want an example where the explicit conversion is incorrect, and you want to use an implicit conversion even though you get a warning. Here:

unsigned f();

void g( int x )
{
  if( x < f() ) //warning C4018: '<': signed/unsigned mismatch
  {
    ....
  }
}

So you change that to:

if( static_cast<unsigned>(x) < f() )

Then under refactoring both f and g get changed:

unsigned long f();

void g( long x )
{
  if( static_cast<unsigned>(x) < f() )
  {
    ....
  }
}

And now you probably have a bug, and no warning. I say probably, because if this was my code, I'd know the cast is correct, because my choice of implicit vs. explicit conversion does not depend on what warnings I get from the compiler; I do what is right. But if you've instructed your programmers to use casts to silence warnings, you never know why that cast sits there.

Assuming the implicit conversion is correct (it probably is, but you should always know), this should be written as:

assert(x>=0);
if( x < f() )
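A self-contained sketch of that failure mode (hypothetical values; assuming 64-bit long and 32-bit unsigned, as on typical LP64 platforms):

#include <iostream>

unsigned long f() { return 5000000000UL; } // f() after the refactoring

void g( long x )
{
    if( static_cast<unsigned>(x) < f() ) // stale cast kept from the int version
        std::cout << x << " < f()\n";
    else
        std::cout << x << " >= f()\n";
}

int main()
{
    g( 6000000000L ); // prints "6000000000 < f()" although 6e9 > 5e9:
                      // the cast silently truncates x to 1705032704
}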
On Fri, 23 Nov 2018 at 20:59, Emil Dotchevski via Boost < boost@lists.boost.org> wrote:
On Thu, Nov 22, 2018 at 12:56 AM Jayesh Badwaik via Boost < boost@lists.boost.org> wrote:
On Monday, 19 November 2018 21:02:55 CET Emil Dotchevski via Boost wrote:
In that context there are probably legal reasons for a zero-warning policy, but it is not true that lack of warnings means fewer errors; in fact it could easily lead to more errors. For example, warnings about implicit conversions are frequently "fixed" by replacing the implicit conversion with an explicit conversion (cast). But the two are semantically very different, and therefore one of them is incorrect, and very likely that is the explicit one.
Didn't know that. Can I see an example where the explicit warning is incorrect as opposed to implicit warning?
I'm guessing you want an example where the explicit conversion is incorrect, and you want to use an implicit conversion even though you get a warning. Here:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
So you change that to:
if( static_cast<unsigned>(x) < f() )
Better yet, provide a little library boilerplate and make the conversion explicit, correct and checkable at compile time:

void g( int x )
{
  using result_type = decltype(f());
  if(auto x_ = convert(to_type<result_type>, x); x_ < f())
  {
    foo();
  }
}
Then under refactoring both f and g get changed:
unsigned long f();
void g( long x ) { if( static_cast<unsigned>(x) < f() ) { .... } }
And now you probably have a bug, and no warning. I say probably, because if this was my code, I'd know the cast is correct because my choice of implicit vs. explicit conversion does not depend on what warnings I get from the compiler, I do what is right. But if you've instructed your programmers to use casts to silence warnings, you never know why that cast sits there.
Assuming the implicit conversion is correct (it probably is, but you should always know), this should be written as:
assert(x>=0); if( x < f() )
-- Richard Hodges hodges.r@gmail.com office: +442032898513 home: +376841522 mobile: +376380212 (this will be *expensive* outside Andorra!) skype: madmongo facebook: hodges.r
On Fri, Nov 23, 2018 at 9:48 PM Richard Hodges via Boost < boost@lists.boost.org> wrote:
On Fri, 23 Nov 2018 at 20:59, Emil Dotchevski via Boost < boost@lists.boost.org> wrote:
On Thu, Nov 22, 2018 at 12:56 AM Jayesh Badwaik via Boost < boost@lists.boost.org> wrote:
On Monday, 19 November 2018 21:02:55 CET Emil Dotchevski via Boost wrote:
In that context there are probably legal reasons for a zero-warning policy, but it is not true that lack of warnings means fewer errors; in fact it could easily lead to more errors. For example, warnings about implicit conversions are frequently "fixed" by replacing the implicit conversion with an explicit conversion (cast). But the two are semantically very different, and therefore one of them is incorrect, and very likely that is the explicit one.
Didn't know that. Can I see an example where the explicit warning is incorrect as opposed to implicit warning?
I'm guessing you want an example where the explicit conversion is incorrect, and you want to use an implicit conversion even though you get a warning. Here:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
So you change that to:
if( static_cast<unsigned>(x) < f() )
Better yet, provide a little library boilerplate and make the conversion explicit, correct and checkable at compile time:
void g( int x ) { using result_type = decltype(f()); if(auto x_ = convert(to_type<result_type>, x); x_ < f()) { foo(); } }
No, thank you, I prefer if( x < f() )
On Friday, 23 November 2018 20:58:44 CET Emil Dotchevski via Boost wrote:
So you change that to:
if( static_cast<unsigned>(x) < f() )
I would do the opposite:

if( x < static_cast<int>(f()))

then, if the type of `x` changed in the function, I will now get a new warning, and I can fix it. -- Best Jayesh Badwaik https://jayeshbadwaik.github.io
On 11/24/18 2:44 PM, Jayesh Badwaik via Boost wrote:
On Friday, 23 November 2018 20:58:44 CET Emil Dotchevski via Boost wrote:
So you change that to:
if( static_cast<unsigned>(x) < f() )
I would do the opposite:
if( x < static_cast<int>(f()))
then, if the type of `x` changed in the function, I will now get a new warning, and I can fix it.
This changes the semantics of the comparison.
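A sketch of why that rewrite changes the meaning (hypothetical f; assuming 32-bit int and two's-complement conversion): once f() can return a value above INT_MAX, casting its result to int inverts the comparison.

#include <cstdio>

unsigned f() { return 3000000000u; } // exceeds INT_MAX after some change

void g( int x )
{
    // implicit: x promotes to unsigned; correct here for non-negative x
    std::printf( "%d\n", x < f() );                   // prints 1
    // the rewrite: the cast maps 3000000000 to a large negative int
    // on two's-complement targets, flipping the result
    std::printf( "%d\n", x < static_cast<int>(f()) ); // prints 0
}

int main() { g( 7 ); }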
On Sat, Nov 24, 2018 at 3:44 AM Jayesh Badwaik via Boost < boost@lists.boost.org> wrote:
On Friday, 23 November 2018 20:58:44 CET Emil Dotchevski via Boost wrote:
So you change that to:
if( static_cast<unsigned>(x) < f() )
I would do the opposite:
if( x < static_cast<int>(f()))
then, if the type of `x` changed in the function, I will now get a new warning, and I can fix it.
What warning will you get?
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions. Not stating the preconditions is an error. If you state them and ensure that the actual values of both 'x' and 'f()' can be equally represented as 'int' or 'unsigned', then and only then is the comparison correct, and an explicit cast to either type is a valid one. Ciao Dani
On 11/24/2018 7:55 AM, Daniela Engert via Boost wrote:
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions. Not stating the preconditions is an error. If you state them and ensure that the actual values of both 'x' and 'f()' can be equally represented as 'int' or 'unsigned', then and only then is the comparison correct, and an explicit cast to either type is a valid one.
Errors should be issued when you are not following the C++ standard. If you think that comparing a signed number with an unsigned number is not following the C++ standard, you should reread the C++ standard. I do not mind this particular warning at all and go out of my way to fix it, but it is not an error per the C++ standard.

This is what bothers me most about the idea that all warnings must be addressed in Boost libraries. People do not seem to understand that a warning should not mean that the C++ standard is not being followed. Some compilers actually do produce warnings when the C++ standard is not being followed, and unfortunately IMO the C++ standard allows this. So now we have warnings from some compilers that run the gamut from actual errors, because the C++ standard is not being followed, to essentially lint-like worries that some syntax may be hiding incorrect use of C++ even when the code is perfectly correct.

Therefore, while warnings should be heeded, I do not think it is possible in practical use to fix all warnings for all compiler implementations without making code even more obfuscated and more confusing than it normally is. It is an ideal to fix all warnings, but it is not a realistic goal in quite a number of situations.
Ciao Dani
Am 24.11.2018 um 15:52 schrieb Edward Diener via Boost:
On 11/24/2018 7:55 AM, Daniela Engert via Boost wrote:
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions. Not stating the preconditions is an error. If you state them and ensure that the actual values of both 'x' and 'f()' can be equally represented as 'int' or 'unsigned', then and only then is the comparison correct, and an explicit cast to either type is a valid one.
Errors should be when you are not following the C++ standard. If you think that comparing a signed number with an unsigned number is not following the C++ standard you should reread the C++ standard.
I've already acknowledged that this is not an error according to the C++ standard. Nonetheless it is an error in the mathematical sense that it *may* invert the meaning of 'operator<' because of the implicit conversion rules of C++.
I do not mind this particular warning at all and go out of my way to fix it, but it is not an error in the C++ standard.
Right. Situations like these must be acted on. Just letting them go "because it is correct by the standard" is unacceptable imho.
This is what bothers me most about the idea that all warnings must be addressed in Boost libraries.
In most cases, I simply turn off compiler warnings by a #pragma after I have convinced myself that "everything is fine, nothing to see here, move along". The simple fact that someone has audited warnings, acted accordingly, and then documented this - maybe by just suppressing them in the source code - is an indicator of due diligence.
Therefore while warnings should be heeded I do not think it is possible in practical use to fix all warnings for all compiler implementations without making code even more obfuscated and more confusing than it normally is. It is an ideal to fix all warnings but it is not a realistic goal in quite a number of situations.
True. And I think earlier versions of MSVC went way overboard spitting out warnings about totally reasonable and perfectly valid code. The versions up to VS2013 were just a pain in the a**, and I am happy to compile my code using only the latest VS2017 these days, which is much more sensible in this regard. (I can only speak about experiences with MSVC; it's the only compiler that we can use.) Nevertheless, I made Boost compile - at least in our codebase - on VS2013 without warnings at /W4 after following the procedure described above: audit and act upon. Ciao Dani
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Daniela Engert via Boost Sent: 24 November 2018 17:32 To: Boost Developers Cc: Daniela Engert Subject: Re: [boost] Current Guidance on Compiler Warnings?
<snip>
This is what bothers me most about the idea that all warnings must be addressed in Boost libraries.
In most cases, I simply turn off compiler warnings by a #pragma after I have convinced myself that "everything is fine, nothing to see here, move along". The simple fact that someone has audited warnings, acted accordingly, and then documented this - maybe by just suppressing them in the source code - is an indicator of due diligence.
+1 I agree that suppressing is the right thing to do, preferably with a comment justifying why.
Therefore while warnings should be heeded I do not think it is possible in practical use to fix all warnings for all compiler implementations without making code even more obfuscated and more confusing than it normally is. It is an ideal to fix all warnings but it is not a realistic goal in quite a number of situations.
And one can ensure that all compilers (recent, decent ones with fine-grained warning control) will not issue any warnings, which keeps novices unconfused by a load of warnings and, most important, keeps lawyers and pedants quiet too. Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
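A sketch of that suppress-with-justification pattern across the three major front ends (an illustrative arrangement, not an official Boost guideline):

unsigned f();

bool g( int x )
{
    // Audited: x is guaranteed non-negative by the caller (hypothetical
    // invariant), so the mixed-sign comparison is correct as written.
#if defined(_MSC_VER)
# pragma warning(push)
# pragma warning(disable : 4018) // signed/unsigned mismatch
#elif defined(__clang__)
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wsign-compare"
#elif defined(__GNUC__)
# pragma GCC diagnostic push
# pragma GCC diagnostic ignored "-Wsign-compare"
#endif
    bool r = x < f();
#if defined(_MSC_VER)
# pragma warning(pop)
#elif defined(__clang__)
# pragma clang diagnostic pop
#elif defined(__GNUC__)
# pragma GCC diagnostic pop
#endif
    return r;
}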
Daniela Engert wrote:
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions.
Making it an error would be a useful first step towards making it work correctly. :-) (It's perfectly possible to compare a value in [INT_MIN, INT_MAX] with a value in [0, UINT_MAX], it's just that the standard says op< needs to do the wrong thing.)
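For reference, C++20 later standardized exactly this correct mixed-domain comparison as std::cmp_less in <utility>; a sketch, including a hand-written pre-C++20 equivalent of the same logic:

#include <utility> // std::cmp_less (C++20)

unsigned f();

// pre-C++20, the correct comparison can be written by hand:
inline bool less_signed_unsigned( int a, unsigned b )
{
    return a < 0 || static_cast<unsigned>(a) < b;
}

void g( int x )
{
    if( std::cmp_less( x, f() ) ) // correct for every x, including negatives
    {
        // ...
    }
}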
Am 24.11.2018 um 16:52 schrieb Peter Dimov via Boost:
Daniela Engert wrote:
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions.
Making it an error would be a useful first step towards making it work correctly. :-)
This is exactly what we do in our software. And because of that there must at least be warnings from the compiler to stop us from committing such mathematical nonsense. Situations like these must be inspected, audited and acted upon by human beings. Throwing the hands in the air and insisting "but the language allows me to" is just asking for disaster. I am working in a business where shrugging and walking away is totally unacceptable because it may do serious damage or kill people.
(It's perfectly possible to compare a value in [INT_MIN, INT_MAX] with a value in [0, UINT_MAX], it's just that the standard says op< needs to do the wrong thing.)
Right, and I am not happy about this heritage from C. There are some Boost libs which are full of *errors* like these. Ciao Dani
On Sat, Nov 24, 2018 at 9:12 AM Daniela Engert via Boost < boost@lists.boost.org> wrote:
Am 24.11.2018 um 16:52 schrieb Peter Dimov via Boost:
Daniela Engert wrote:
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions.
Making it an error would be a useful first step towards making it work correctly. :-)
This is exactly what we do in our software. And because of that there must at least be warnings from the compiler to stop us from committing such mathematical nonsense.
It is only nonsense if the value stored in the signed side can be negative, and that's something the programmer should know. Therefore, assert, not cast.
Situations like these must be inspected, audited and acted upon by human beings. Throwing the hands in the air and insisting "but the language allows me to" is just asking for disaster.
It's more like "I need to do this comparison, I know it is fine in this case, and the language allows me to". There is no question that a human being should inspect it, I'm not saying the warning is meaningless. What makes it impractical is that the only way to silence it is to replace the implicit conversion with a cast. The inspecting human being should understand that these are two semantically different operations, and should use the cast only if it is correct (and the implicit conversion is incorrect), not as a means to silence the warning.
On Sat, Nov 24, 2018 at 7:53 AM Peter Dimov via Boost wrote:
Daniela Engert wrote:
Am 23.11.2018 um 20:58 schrieb Emil Dotchevski via Boost:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
The only problem that I can see here is the fact that this is flagged as a warning rather than an error. I know this is technically correct, but you simply cannot compare values from different value domains without preconditions.
Making it an error would be a useful first step towards making it work correctly. :-)
(It's perfectly possible to compare a value in [INT_MIN, INT_MAX] with a value in [0, UINT_MAX], it's just that the standard says op< needs to do the wrong thing.)
The problem with signed/unsigned mismatch is not just in the comparison, but also in the operations. If unsigned x=2, the expression x-3 may not be meaningfully represented by an unsigned integer, and your proposed change to op< semantics would still produce an incorrect result. I do not know if it is possible to solve this problem in all arithmetic operations, but practically speaking it doesn't matter.
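A minimal sketch of that point (assuming 32-bit unsigned):

#include <iostream>

int main()
{
    unsigned x = 2;
    std::cout << x - 3 << '\n';         // 4294967295, not -1: the
                                        // subtraction wraps around
    std::cout << ( x - 3 < x ) << '\n'; // 0: "x minus 3" compares greater
                                        // than x, with no warning anywhere
}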
Am 24.11.2018 um 19:48 schrieb Emil Dotchevski via Boost:
The problem with signed/unsigned mismatch is not just in the comparison, but also in the operations. If unsigned x=2, the expression x-3 may not be meaningfully represented by an unsigned integer.
Au contraire, mon ami! Unsigned arithmetic is defined in C++ as an abelian ring modulo a value depending on the width of the given unsigned type. In fact, unsigned types are the only useful types if you need well-defined mathematical properties (like I do when it comes to digital signal processing). Ciao Dani
On Sat, Nov 24, 2018 at 11:07 AM Daniela Engert via Boost < boost@lists.boost.org> wrote:
Am 24.11.2018 um 19:48 schrieb Emil Dotchevski via Boost:
The problem with signed/unsigned mismatch is not just in the comparison, but also in the operations. If unsigned x=2, the expression x-3 may not be meaningfully represented by an unsigned integer.
Au contraire, mon ami! Unsigned arithmetic is defined in C++ as an abelian ring modulo a value depending on the width of the given unsigned type. In fact, unsigned types are the only useful types if you need well-defined mathematical properties (like I do when it comes to digital signal processing).
Yes, I am aware that unsigned types have defined standard behavior when wrapping around, while with signed ints overflow is undefined (btw, in signal processing sometimes you want saturation -- which is a valid implementation for signed operations -- instead of wrapping).

But that's not what I mean. I mean that if you have unsigned x=2 and int y, the expression x-3 wraps around to a huge positive value rather than the -1 the arithmetic suggests, so expressions involving it and y will not behave meaningfully.
On 11/24/18 7:52 AM, Peter Dimov via Boost wrote:
Daniela Engert wrote:
Making it an error would be a useful first step towards making it work correctly. :-)
(It's perfectly possible to compare a value in [INT_MIN, INT_MAX] with a value in [0, UINT_MAX], it's just that the standard says op< needs to do the wrong thing.)
I apologize in advance for perhaps hijacking the thread, but I can't restrain myself from the opportunity to plug the most recent addition to the Boost libraries - safe_numerics.

C/C++ "arithmetic" is not really arithmetic. That is, results of the arithmetic operations are not guaranteed to map to their counterparts in the arithmetic of integers. The C/C++ rules for promoting operands to the same type can and do introduce errors. Then the application of operations defined in terms of unbounded integers to the bounded integers we use in our programs can and does return incorrect results. Warnings like the above can definitely help, but can't guarantee that the problem does not occur. It is not possible to prove that these problems cannot occur by visual inspection. We're living a lie.

So I invite all parties who write programs which must function as written to take a look at the Boost safe_numerics library. I think you'll find it interesting. Robert Ramey
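A minimal sketch of the idea (based on the library's documented interface around the time of its acceptance; treat the namespace, header and the exact exception type thrown by the default policy as assumptions):

#include <boost/safe_numerics/safe_integer.hpp>
#include <iostream>
#include <limits>

int main()
{
    using boost::safe_numerics::safe;
    try
    {
        safe<int> x = std::numeric_limits<int>::max();
        safe<int> y = x + 1; // the arithmetic error is detected and thrown,
                             // instead of silently producing a wrong value
        std::cout << y << '\n';
    }
    catch( std::exception const & e )
    {
        std::cout << "caught: " << e.what() << '\n';
    }
}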
On Sat, Nov 24, 2018 at 11:18 AM Robert Ramey via Boost < boost@lists.boost.org> wrote:
On 11/24/18 7:52 AM, Peter Dimov via Boost wrote:
Daniela Engert wrote:
Making it an error would be a useful first step towards making it work correctly. :-)
(It's perfectly possible to compare a value in [INT_MIN, INT_MAX] with a value in [0, UINT_MAX], it's just that the standard says op< needs to do the wrong thing.)
I apologize in advance for perhaps hijacking the thread, but I can't restrain myself from the opportunity to plug the most recent addition to the Boost libraries - safe_numerics.
C/C++ "arithmetic" is not really arithmetic. That is, results of the arithmetic operations are not guaranteed to map to their counterparts in the arithmetic of integers. The C/C++ rules for promoting operands to the same type can and do introduce errors.
In C and C++ operands are not promoted to the same type. If you have:

short x, y;

the type of the expression x+y is not short, it is int.
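This is easy to verify directly; a minimal sketch:

#include <type_traits>

int main()
{
    short x = 0, y = 0;
    // integral promotion: both operands become int before the addition
    static_assert( std::is_same<decltype(x + y), int>::value,
                   "short + short yields int" );
}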
On 11/24/18 3:11 PM, Emil Dotchevski via Boost wrote:
In C and C++ operands are not promoted to the same type. If you have:
short x, y;
the type of the expression x+y is not short, it is int.
Right. They are both promoted to the same type. That type is int.
On November 23, 2018 2:58:44 PM EST, Emil Dotchevski via Boost wrote:
I'm guessing you want an example where the explicit conversion is incorrect, and you want to use an implicit conversion even though you get a warning. Here:
unsigned f();
void g( int x ) { if( x < f() ) //warning C4018: '<': signed/unsigned mismatch { .... } }
So you change that to:
if( static_cast<unsigned>(x) < f() )
Why is that the appropriate cast? If your argument is that such a cast is the likely change applied by an ignorant maintainer trying to silence a warning, then that's not a good argument. An ignorant maintainer can make all sorts of breaking changes.
Then under refactoring both f and g get changed:
unsigned long f();
void g( long x ) { if( static_cast<unsigned>(x) < f() ) { .... } }
And now you probably have a bug, and no warning.
Your point that a cast may not be the best solution to eliminating warnings is valid. Assertions, range checking, or library solutions are necessary to know that operations are safe. With those in place, pragmas and casts can be used to silence warnings. You may argue that the implicit conversion is correct, but it is as much subject to breakage by future maintenance as your example. -- Rob (Sent from my portable computation device.)
On Sat, Nov 24, 2018 at 5:19 PM Rob Stewart via Boost wrote:
Your point that a cast may not be the best solution to eliminating warnings is valid. Assertions, range checking, or library solutions are necessary to know that operations are safe. With those in place, pragmas and casts can be used to silence warnings.
I wouldn't place pragmas and casts in the same category. Using a pragma to silence a warning will not alter the program semantically, while a cast will.
You may argue that the implicit conversion is correct, but it is as much subject to breakage by future maintenance as your example.
I could argue that the implicit conversion is more likely to be correct, but I didn't do that. Here is my argument again:

1) A cast has different semantics than an implicit conversion.

2) If the implicit conversion is correct, replacing it with a cast to silence a warning is incorrect.

Which of these two points do you think is false?
On November 24, 2018 10:31:22 PM EST, Emil Dotchevski via Boost wrote:
Here is my argument again:
1) A cast has different semantics than an implicit conversion.
2) If the implicit conversion is correct, replacing it with a cast to silence a warning is incorrect.
Which of these two points do you think is false?
Both may be true or false. A cast only has different semantics when it casts to a type different from that of the implicit conversion. Your assumption, all along, is that the cast is to a different type. -- Rob (Sent from my portable computation device.)
Am 25.11.18 um 04:31 schrieb Emil Dotchevski via Boost:
Here is my argument again:
1) A cast has different semantics than an implicit conversion.
2) If the implicit conversion is correct, replacing it with a cast to silence a warning is incorrect.
Which of these two points do you think is false?
I'd argue both are false because they are too general. If you add "may" then these arguments "may" be true. I'll include your further reply:
The reason is that they have the wrong mindset when approaching the problem.
They think: I get a warning, because the compiler doesn't know that this signed value is always positive, but I know that it is, so I'm going to cast, to tell the compiler that I know what I'm doing.
A better reasoning is: I'm converting a signed value to unsigned value, which will not be correct if the signed value is negative. With this in mind, you'll notice that the cast is utterly useless in catching the actual error the compiler is warning you about.
I can't really follow you here: the compiler not knowing that the signed value (which per definition CAN in general be negative) is always positive, and alerting you of this, is the whole point. IF you know absolutely sure that this signed value is always non-negative, then why not: a) change the parameter type to unsigned to communicate this as a precondition of the function, or b) add a cast to the unsigned parameter type, and potentially an assert, if this helps future readers and catches precondition violations. I don't see where this cast is wrong. It does the same as the implicit conversion, but explicitly, and hence conveys *intent*. Disabling this warning might hide instances where such implicit conversions are in fact wrong.

I also see a contradiction in your argument: "this signed value is always positive, [...] I know that it is" is a valid reason for "converting a signed value to unsigned value, which will not be correct if the signed value is negative". So why is "the cast [...] utterly useless in catching the actual error"? If you know (or can ensure) the precondition, then there is no error, and putting in the cast states that you thought about it. Of course this is a should-be world. If maintainers just throw in casts *without* checking that the cast is valid, then it may be an error: exactly the one the compiler was warning you about.

In conclusion: these warnings are about suspicious code that *needs* to be inspected. After inspection, relevant action must be taken, which should result in the warning being resolved or suppressed, so the next reviewer does not need to inspect it again unless surrounding code has changed.
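A sketch of the audited assert-and-cast pattern described above (hypothetical f):

#include <cassert>

unsigned f();

void g( int x )
{
    // audited: callers pass only non-negative x (documented precondition);
    // the assert catches violations in debug builds, and the cast records
    // that the conversion was considered and is intentional
    assert( x >= 0 );
    if( static_cast<unsigned>(x) < f() )
    {
        // ...
    }
}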
On Mon, 26 Nov 2018 at 10:12, Alexander Grund via Boost < boost@lists.boost.org> wrote:
IF you know absolutely sure that this signed value is always non-negative, then why not: a) Change the parameter type to unsigned to communicate this as a precondition of the function
b) add a cast to the unsigned parameter type and potentially an assert if this helps future readers and catch precondition violations
I don't see where this cast is wrong. It does the same as the implicit conversion, but explicitly, and hence conveys *intent*.
But it [any of those options] will never alert you to a violation of that precondition. Making std::size_t unsigned is a mistake IMO.
In conclusion: These warnings are about suspicious code that *needs* to be inspected.
If you do as Emil proposes and run a debug build with [that type of] asserts, you'll **know** that **your** surrounding code [or **your** input] is wrong and you'll **know** something needs fixing (not with a cast, but **your** [surrounding] logic). To me his [Emil's] argument is convincing: casting just hides the problem, and since it's hidden, now you'll have to go try and find it, i.e. more time spent debugging [coz you just shot yourself in the foot and you don't know where the gun is (carrying a concealed weapon is a criminal offense in the US AFAIK ;-) )]. We should have a dynamic_assert b.t.w., i.e. the converse of static_assert (in C++, not the C macro). degski -- *"If something cannot go on forever, it will stop" - Herbert Stein*
Am 26.11.18 um 14:50 schrieb degski:
On Mon, 26 Nov 2018 at 10:12, Alexander Grund via Boost wrote:
IF you know absolutely sure that this signed value is always non-negative, then why not: a) Change the parameter type to unsigned to communicate this as a precondition of the function
b) add a cast to the unsigned parameter type and potentially an assert if this helps future readers and catch precondition violations
I don't see where this cast is wrong. It does the same as the implicit conversion, but explicitly, and hence conveys *intent*.
But it [any of those options] will never alert you to a violation of that precondition. Making std::size_t unsigned is a mistake IMO.
- In option a) you get a compiler warning in the calling code when the precondition is violated (conversion from signed to unsigned)
- In option b) the assert will tell you
I might agree with the size_t mistake: using it for sizes may be ok, but as an index type it might have been wrong.
If you do as Emil proposes, run a debug build with [that type of] asserts you'll **know** that **your** surrounding code [or **your** input ] is wrong and you'll **know** something needs fixing (not with a cast, but **your** [surrounding] logic). To me his [Emil's] argument is convincing, casting just hides the problem
Correct: the surrounding code needs fixing, not the code with the now-added cast. The warning about the "missing" cast can never point you to the surrounding code. It is a violation of the contract of the function to be called with a negative value. You can catch this with an assert or move the responsibility up to the caller (change the param type). There is another point about this: this function might be an internal one which you have full control over, maybe even a lambda. There *may* be cases where you can guarantee that it is not called with negative values (the local lambda, an internal class function only locally accessible, ...) but normally you either assert-and-cast or change the param type, leading to e.g.:

for(int i=0; i<42; i++)
  foo(static_cast<unsigned>(i));

The cast can be easily validated locally; no assert required.
coz you just shot yourself in the foot and you don't know where the gun is (carrying a concealed weapon is a criminal offense in the US AFAIK ;-) )].
If you do not suppress the warnings (after careful investigation), how do you find real issues highlighted by warnings? Using your metaphor: how do you find the serial killer with the gun if everyone has a gun and shoots wildly around?
We should have a dynamic_assert b.t.w., i.e. the converse of static_assert (in C++, not the C macro).
I prefer something like this: template
On 27/11/2018 03:21, Alexander Grund wrote:
I might agree with the size_t mistake: Using it for sizes may be ok, but as an index type might have been wrong.
As an index for a vector or array, which cannot have a valid index below zero, it's perfectly fine. And it's the only way you could index arrays with INT_MAX < size <= SIZE_MAX. The only case where it falls short is if you want to decrement below zero, as in:

for (size_t i = s.size() - 1; i >= 0; --i) { /* whoops */ }

(And a good compiler will warn you about this as well.) This is readily solvable, however:

for (size_t i = s.size(); i > 0; --i) { /* use i-1 */ }

(Or, of course, using iterators and ranges.)
Gavin Lambert wrote:
On 27/11/2018 03:21, Alexander Grund wrote:
I might agree with the size_t mistake: Using it for sizes may be ok, but as an index type might have been wrong.
As an index for a vector or array which cannot have a valid index below zero, it's perfectly fine.
It's not perfectly fine, because you can pass a negative index to it and there's no way to check for that (from within the function). If you take a signed type, you can assert. It seems that we need to go over this at least once per year.
On 27/11/2018 12:05, Peter Dimov wrote:
Gavin Lambert wrote:
As an index for a vector or array which cannot have a valid index below zero, it's perfectly fine.
It's not perfectly fine, because you can pass a negative index to it and there's no way to check for that (from within the function). If you take a signed type, you can assert.
You cannot pass a negative index to it without a warning at the call site, so you have to fix it there anyway. Besides, in C++20 signed-unsigned integer conversion will be officially required to be two's complement (and prior to this, all major platforms implement it that way anyway). In this case, for any non-massive array you're fairly safe (esp. when someone fixes the call site) with a simple:

assert(i < size());

(Whereas with a signed index you would have to also check for negative values.)
Gavin Lambert wrote:
In this case for any non-massive array you're fairly safe (esp. when someone fixes the call site) with a simple:
assert(i < size());
That is correct. If you have an upper bound, `i < n` for unsigned `i` is equivalent to `i >= 0 && i < n` with a signed `i`, so in this specific case you can use either.

Signed is still preferable though, because it's less surprising. `x > -1` gives you the normal answer, for instance. But, you'll say, this will be a warning. Well yes, this specific case will be, but not all will. Some signed/unsigned mismatches don't warn on purpose because there are too many false positives (https://godbolt.org/z/c1rzjS), and in some cases, such as with the difference between (a-b)/(c-d) and (b-a)/(d-c), unsigned finds a way to ruin your day without any signed/unsigned mismatches at all.

So it's a long-standing guideline to never use `unsigned`, except for bitwise operations and modular arithmetic. For numbers, signed is the way to go.
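A minimal sketch of the (a-b)/(c-d) trap (assuming 32-bit unsigned): every operand is unsigned, so no mixed-sign warning fires anywhere, yet one spelling is wrong:

#include <iostream>

int main()
{
    unsigned a = 1, b = 3, c = 1, d = 2;
    // mathematically (a-b)/(c-d) == (b-a)/(d-c) == 2, but in unsigned
    // arithmetic the intermediate negatives wrap around:
    std::cout << ( a - b ) / ( c - d ) << '\n'; // 0  (4294967294 / 4294967295)
    std::cout << ( b - a ) / ( d - c ) << '\n'; // 2
}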
On Mon, Nov 26, 2018 at 3:44 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 27/11/2018 12:05, Peter Dimov wrote:
Gavin Lambert wrote:
As an index for a vector or array which cannot have a valid index below zero, it's perfectly fine.
It's not perfectly fine, because you can pass a negative index to it and there's no way to check for that (from within the function). If you take a signed type, you can assert.
You cannot pass a negative index to it without a warning at the call site, so you have to fix it there anyway.
If you have:

void f( unsigned );
void g( int x ) { f(x); }

I don't think you'll get a warning. But I might be wrong, so let's say you do get a warning, so you do:

void g( int x ) { f( static_cast<unsigned>(x) ); }

How does this help in detecting the logic error of passing a negative x to g? Can you quantify the exact mechanics by which this cast makes your code safer? By what measure is the code less safe without the cast?
On 27/11/2018 16:31, Emil Dotchevski wrote:
If you have:
void f( unsigned );
void g( int x ) { f(x); }
I don't think you'll get a warning.
Hmm, that's true. That seems worrisome. You do get a warning if you try to pass a negative constant value (eg. calling f(-2)). And I agree that you should not get a warning when passing a positive constant value even where it would normally assume a signed type. But for the case where you're passing a parameter or other variable of the wrong type, it really should have at least the option to generate a warning, even if it's not enabled by default.
But I might be wrong, so let's say you do get a warning, so you do:
void g( int x ) { f( static_cast<unsigned>(x) ); }
How does this help in detecting the logic error of passing a negative x to g? Can you quantify the exact mechanics by which this cast makes your code safer? By what measure is the code less safe without the cast?
By itself, it doesn't help; you're correct. But that wasn't what I was saying; I was saying that the warning would help you to find that code so that you can then rewrite g to one of the following:

void g( unsigned x ) { f(x); }

void g( int x )
{
  assert(x >= 0);
  f(unsigned(x)); // or static_cast, if you prefer
}

Both of these are correct. Performing conversion (regardless of whether it is implicit or explicit) without the assert is incorrect. Performing implicit conversion with the assert is safe, but not easily distinguishable from the original unsafe code unless the compiler can inspect the assert even in NDEBUG builds.

And again, if someone later refactors f to have a different parameter type (which is not able to exactly represent every possible value of 'unsigned'), then either of these *should* cause a warning to be raised again, requiring someone to look at it. (Though as you noted before, that doesn't appear to currently be the case. That seems like a compiler defect.)
On Mon, Nov 26, 2018 at 9:30 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
But for the case where you're passing a parameter or other variable of the wrong type, it really should have at least the option to generate a warning
The problem is that implicit conversions happen a lot in C++, and it is wrong for the compiler to warn you about things you should know. By your logic this code should generate a warning:

short x;
++x;

Semantically, ++x contains an implicit conversion from int to short, which can result in loss of data. Same is true about signed/unsigned implicit conversions. Do you want a warning in this case?

unsigned x; int y;
x += y;

You're adding a signed int to an unsigned int, that seems so dangerous! You better cast, that would be so much safer. :)
But I might be wrong, so let's say you do get a warning, so you do:
void g( int x ) { f( static_cast<unsigned>(x) ); }
How does this help in detecting the logic error of passing a negative x to g? Can you quantify the exact mechanics by which this cast makes your code safer? By what measure is the code less safe without the cast?
By itself, it doesn't help; you're correct. But that wasn't what I was saying; I was saying that the warning would help you to find that code so that you can then rewrite g to one of the following:
void g( unsigned x ) { f(x); }
void g( int x ) { assert(x >= 0); f(unsigned(x)); // or static_cast, if you prefer }
I know, but it is still a valid question to ask what the role of the cast is. Does it make the program more readable? In what way is your version preferable? To me it seems like a potential bug, introduced by a programmer who is unsure about implicit conversions in C. Semantically, I can't imagine why I would want the call to f to always cast to unsigned. Consider:

void f( unsigned );
void f( int ); //Later we add this overload

void g( int x ) { assert(x>=0); f(unsigned(x)); }
Both of these are correct. Performing conversion (regardless of whether it is implicit or explicit) without the assert is incorrect. Performing implicit conversion with the assert is safe, but not easily distinguishable from the original unsafe code
You keep saying the original code is unsafe. How is the implicit conversion less safe than the cast?
On 28/11/2018 10:01, Emil Dotchevski wrote:
Same is true about signed/unsigned implicit conversions. Do you want a warning in this case?
Yes. They are not compatible types, and so conversion should be treated with suspicion by default. As should hopefully be obvious, implicit narrowing conversions are never a good idea.

Everyone recognises that conversion from intN_t to uintM_t or from uintN_t to intM_t where N > M is a narrowing conversion, and this should have a warning. This is not controversial. Similarly, everyone's happy that conversion from uintN_t to intM_t where N < M is *not* a narrowing conversion, and this can be done implicitly, with no warning.

What is less obvious and more controversial, but is still true: conversion from intN_t to uintM_t or uintN_t to intM_t where N = M is *also* a narrowing conversion (both ways), because there are values in one that are not expressible in the other. This is *slightly* better than the first case because the conversion is at least perfectly reversible -- there is no loss or problem if you convert, store, and then convert back before use -- as long as you never try to use the value in the "wrong" type, or you have sufficient assurance that the actual value falls within the common range of both types. (An assert helps, but only if it happens to be hit with the bad input in a debug build where someone reports the failure. Which will not catch all cases.)

And similarly, intN_t to uintM_t where N < M is still always a narrowing conversion, for the entire range of negative values. The conversion is "safe" only if you have assurance that you don't have a negative value. (Or if you do have a negative value and you're trying to do bit-twiddling for performance reasons with an appropriate algorithm, which generally assumes two's complement representation and thus relies on implementation-defined behaviour. Much safety is sacrificed on the altar of performance. But even there it can and should be explicit about the casting, since that doesn't impact performance.)
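For the N = M case the round trip really is lossless; a small sketch (assuming 32-bit int and two's complement; the unsigned-to-signed direction is implementation-defined before C++20):

#include <iostream>

int main()
{
    int before = -5;
    unsigned u = static_cast<unsigned>(before); // 4294967291 for 32-bit int
    int after = static_cast<int>(u);            // -5 again: round trip intact
    std::cout << ( before == after ) << '\n';   // 1
    std::cout << ( u > 2000000000u ) << '\n';   // 1: but *using* u as an
                                                // unsigned count is wrong
}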
I know, but it is still a valid question to ask what is the role of the cast. Does it make the program more readable? In what way is your version preferable?
It's an assurance (at least in theory) that someone is changing the type intentionally rather than unintentionally. It's the exact same reason that people are encouraged to make conversion constructors and operators explicit, especially for potentially narrowing conversions.
You keep saying the original code is unsafe. How is the implicit conversion less safe than the cast?
See above. One shows intent, the other does not.
On Tue, Nov 27, 2018 at 3:44 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
As should hopefully be obvious, implicit narrowing conversions are never a good idea.
short f();

short x = f();
short y = x+1; //Implicit conversion from int to short.

Do you think the compiler should warn in this case? Is the version below better? Safer?

short y = static_cast<short>(x+1);
On 28/11/2018 13:56, Emil Dotchevski wrote:
As should hopefully be obvious, implicit narrowing conversions are never a good idea.
short f();
short x = f(); short y = x+1; //Implicit conversion from int to short.
Do you think the compiler should warn in this case? Is the version below better? Safer?
short y = static_cast<short>(x+1);
No, that's a language defect. In that context "1" should be of type short (because as a literal, it can be) and then it should apply "short operator+(short, short)", and then there would be no problem (assuming it doesn't overflow). (Or as a general rule, the literal "1" should actually be type int8_t, because that's the smallest signed type which includes its value, and can always be safely implicitly extended to a larger type based on usage.) I am aware that the language does not work like this currently. This is one of the things that make it so easy to produce incorrect code.
On Tue, Nov 27, 2018 at 6:47 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 28/11/2018 13:56, Emil Dotchevski wrote:
As should hopefully be obvious, implicit narrowing conversions are never a good idea.
short f();
short x = f(); short y = x+1; //Implicit conversion from int to short.
Do you think the compiler should warn in this case? Is the version below better? Safer?
short y = static_cast<short>(x+1);
No, that's a language defect. In that context "1" should be of type short (because as a literal, it can be) and then it should apply "short operator+(short, short)", and then there would be no problem (assuming it doesn't overflow).
Your position is self-contradictory. You're saying that under short operator+(short,short) it's fine to assume no overflow, without a warning, even though short overflow would produce an incorrect y, but under int operator+(short,short) it's not fine to truncate the int result to short without a warning, even though it would produce (on most platforms) exactly the same incorrect y.
I am aware that the language does not work like this currently. This is one of the things that make it so easy to produce incorrect code.
The language currently works correctly in this case, thank god. The idea that if x and y are of type short, then x+y should be of type short is utterly wrong. When you add two shorts, you get an int, because computing the int result does not use more resources than computing the short result. And, of course, it should be truncated when stored in a short. And, of course, it would be lame to warn about the potential loss of data. It's just x+1, it's not that dangerous. :)
On 28/11/2018 16:28, Emil Dotchevski wrote:
Your position is self-contradictory. You're saying that under short operator+(short,short), it's fine to assume no overflow, without a warning, even though short overflow would produce incorrect y, but under int operator+(short,short) it's not fine to truncate the int result to short, without a warning, even though it would produce (on most platforms, exactly the same) incorrect y.
"int operator+(short, short)" simply shouldn't exist, because it's type-inconsistent. (For example the others include "int operator+(int, int)" and "double operator+(double, double)" and you would generally expect things like "string operator+(string, string)" as well.) Granted that some instruction sets provides things like 32*32=64, but C++ ignores those anyway. If you are expecting a longer result than a short, then you should cast one of the operands to a larger type, just as you would if you were expecting a longer result than an int. Why are "integers smaller than int" different from every other type in the language? Integer promotion rules make sense in Ye Olde C with indeterminate type varargs and unspecified argument prototypes. They don't make much sense once you move beyond that.
The language currently works correctly in this case, thanks god. The idea that if x and y are of type short, then x+y should be of type short is utterly wrong. When you add two shorts, you get an int, because computing the int result does not use more resources than computing the short result.
That is not always true. There are assembly instructions which can operate on values smaller than a register width, and if the compiler were able to use them then it might be able to use fewer registers for some code patterns, which might reduce use of memory (or cache), which can be a net performance benefit. Granted this would probably be rare; processors are generally better at dealing with values in full register widths.
And, of course, it should be truncated when stored in a short. And, of course, it would be lame to warn about the potential loss of data. It's just x+1, it's not that dangerous. :)
Until it is.
On Tue, Nov 27, 2018 at 8:57 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 28/11/2018 16:28, Emil Dotchevski wrote:
The language currently works correctly in this case, thanks god. The idea that if x and y are of type short, then x+y should be of type short is utterly wrong. When you add two shorts, you get an int, because computing the int result does not use more resources than computing the short result.
That is not always true. There are assembly instructions which can operate on values smaller than a register width, and if the compiler were able to use them then it might be able to use fewer registers for some code patterns.
The optimizer has to use int-size registers only if the int result is assigned to int. If it's assigned to a short, the optimizer can and does use short-size registers.
On 27.11.18 at 22:01, Emil Dotchevski via Boost wrote:
The problem is that implicit conversions happen a lot in C++ and it is wrong for the compiler to warn you about things you should know. By your logic this code should generate a warning:
short x;
++x;
Semantically, ++x contains an implicit conversion from int to short, which can result in loss of data. In this case I expect no warning. The initial conversion to int is a (strange) detail of C/C++. If I explicitly choose to operate on a short, why would I want an int? If "x" were an int, should it be promoted to a wider type on addition? If not, where is the consistency? So: the compiler can see *intent* here and not produce a warning. The same is true about signed/unsigned implicit conversions. Do you want a warning in this case?
unsigned x; int y;
x += y;
You're adding a signed int to an unsigned int, that seems so dangerous! You better cast, that would be so much safer. :) It is not about safer per se, it is about intent. If you cast explicitly you show intent (implying you checked the preconditions or have your own assumptions on the value range in your domain); using the implicit cast *may* be a bug for all the compiler knows: "y = -(x+1)" -> Boom. So it's all about "I know what I'm doing" instead of "I may know what I'm doing". To me it seems like a potential bug, introduced by a programmer who is unsure about implicit conversions in C. Semantically, I can't imagine why I would want the call to f to always cast to unsigned. Consider:
void f( unsigned );
void f( int ); //Later we add this overload
void g( int x )
{
    assert(x>=0);
    f(unsigned(x));
}

Again: intent. What if the (original) call to the unsigned f is the correct one? Not using the cast now changes other, seemingly unrelated code just by adding a new function, and may hence *introduce* a bug rather than avoid it.
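A compilable sketch of the overload hazard described above (f and g as in the example; the bodies are placeholders):

#include <cassert>

void f( unsigned ) { /* original overload */ }
void f( int )      { /* overload added later */ }

void g( int x )
{
    assert(x >= 0);
    f( unsigned(x) );  // the explicit cast keeps selecting f(unsigned)
    // f( x );         // without it, this call would now select f(int)
}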
On Mon, Nov 26, 2018 at 12:12 AM Alexander Grund via Boost < boost@lists.boost.org> wrote:
On 25.11.18 at 04:31, Emil Dotchevski via Boost wrote:
Here is my argument again:
1) A cast has different semantics than an implicit conversion.
2) If the implicit conversion is correct, replacing it with a cast to silence a warning is incorrect.
Which of these two points do you think is false?
I'd argue both are false because they are too general. If you add "may" then these arguments "may" be true.
I'll include your further reply:
The first is a fact, the second is basic logic. Both are true. The reason is that they have the wrong mindset when approaching the problem.
They think: I get a warning, because the compiler doesn't know that this signed value is always positive, but I know that it is, so I'm going to cast, to tell the compiler that I know what I'm doing.
A better reasoning is: I'm converting a signed value to unsigned value, which will not be correct if the signed value is negative. With this in mind, you'll notice that the cast is utterly useless in catching the actual error the compiler is warning you about.
I can't really follow you here: The compiler not knowing that the signed value (which by definition CAN in general be negative) is always positive, and alerting you of this, is the whole point. If you know for absolutely certain that this signed value is always non-negative, then why not: a) Change the parameter type to unsigned to communicate this as a precondition of the function
This is not what unsigned is for. You should generally use signed ints even if the value can't be negative.
I don't see where this cast is wrong. It does the same as the implicit cast, but explicitly, and hence conveys *intent*.
Fact: the cast does not do the same thing as the implicit conversion. (The cast converts to the type you specify in the cast. The implicit conversion converts to the target type.)
Disabling this warning might hide instances where such implicit conversions are in fact wrong.
What you're saying about disabling the warning is equally true about the cast. You're thinking that if you look at it and say it's ok, then the cast can't be wrong, but that's false. If you change the compiler or the platform, or under maintenance, the cast may become the cause of a bug, which it will then hide because you've told the compiler you know what you're doing.
I also see a contradiction in your argument: "this signed value is always positive, [...] I know that it is" is a valid reason for "converting a signed value to unsigned value, which will not be correct if the signed value is negative". So why is "the cast [...] utterly useless in catching the actual error"? If you know (or can ensure) the precondition, then there is no error, and putting in the cast states that you thought about it.
unsigned f();

void g( int x )
{
    if( static_cast<unsigned>(x) < f() ) //x is guaranteed to be >=0
        ....
}

Elsewhere:

g(-1);

Would the cast or the comment catch this bug? No. Compare with:

void g( int x )
{
    assert(x>=0);
    if( x < f() )
        ....
}

First, the assert will catch the bug, while the cast will hide it. Secondly, the implicit conversion is both correct and more likely to remain correct under maintenance, because the compiler will pick the best conversion if the types change. And if you worry about truncation under maintenance, the same reasoning applies: add asserts rather than comments or casts.
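A runnable sketch contrasting the two versions (f() here is a stand-in returning an arbitrary unsigned value):

#include <cassert>

unsigned f() { return 10u; }  // stand-in for the real function

bool g_cast( int x )    // silences the warning; with 32-bit unsigned,
{                       // -1 silently becomes 4294967295
    return static_cast<unsigned>(x) < f();
}

bool g_assert( int x )  // keeps the implicit conversion, checks the precondition
{
    assert(x >= 0);
    return x < f();     // still warns; the conversion happens either way
}

int main()
{
    g_cast(-1);    // returns false with no diagnostic: the bug is hidden
    g_assert(-1);  // aborts in a debug build: the bug is caught
}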
In conclusion: These warnings are about suspicious code that *needs* to be inspected. After inspection, relevant action must be taken, which should result in the warning being resolved or suppressed, so the next reviewer does not need to inspect it again unless surrounding code has changed.
I'm not against doing something that suppresses the warning without changing the program semantically, though these things tend to be ugly and difficult to port, so they're impractical. But adding a cast changes the behavior of the program. It is not a good idea to break a correct program in order to suppress a bogus warning.
On 27/11/2018 09:40, Emil Dotchevski wrote:
This is not what unsigned is for. You should generally use signed ints even if the value can't be negative.
Why? Other than being less typing (which is, I think, the real mistake), what reason is there for this?
Fact: the cast does not do the same thing as the implicit conversion.
(The cast converts to the type you specify in the cast. The implicit conversion converts to the target type.)
No, the implicit conversion promotes to *some* type (usually one of int, unsigned, int64_t, or uint64_t), which is not necessarily the type of either.
AMDG On 11/26/2018 03:15 PM, Gavin Lambert via Boost wrote:
On 27/11/2018 09:40, Emil Dotchevski wrote:
This is not what unsigned is for. You should generally use signed ints even if the value can't be negative.
Why? Other than being less typing (which is, I think, the real mistake), what reason is there for this?
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks. In Christ, Steven Watanabe
On 27/11/2018 11:29, Steven Watanabe wrote:
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks.
I'm not aware of any compilers that actually do that, even in debug mode. (Apart from ubsan, of course, but that also checks for unsigned "overflow".) Avoiding possible undefined behaviour sounds like a good reason to avoid signed integers in general. (Except where actually required.) (Not that unintended wrapping of unsigned integers is much better, but at least it's less likely to summon Nyarlathotep.)
Gavin Lambert wrote:
On 27/11/2018 11:29, Steven Watanabe wrote:
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks.
I'm not aware of any compilers that actually do that, ...
On 27/11/2018 12:02, Peter Dimov wrote:
Gavin Lambert wrote:
On 27/11/2018 11:29, Steven Watanabe wrote:
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks.
I'm not aware of any compilers that actually do that, ...
That shows different optimisation (the first is "return true", the second is "return x != UINT_MAX"), but that doesn't contradict my statement.
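The missing godbolt example is presumably along these lines, matching the two results quoted in the reply:

#include <climits>

bool f_signed( int x )         // overflow is UB, so the compiler may assume
{                              // x + 1 > x always holds: "return true"
    return x + 1 > x;
}

bool f_unsigned( unsigned x )  // wrapping is defined, so the test survives:
{                              // "return x != UINT_MAX"
    return x + 1 > x;
}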
On Tue, 27 Nov 2018 at 01:02, Peter Dimov via Boost
Gavin Lambert wrote:
I'm not aware of any compilers that actually do that, ...
Interestingly [or maybe not so interestingly] MSVC (15.8/7) is the only compiler available there [on godbolt] that does NOT produce more optimal code for the signed integer case (at any optimization level) (STL are you listening). degski -- *“If something cannot go on forever, it will stop” - Herbert Stein*
[degski]
Interestingly [or maybe not so interestingly] MSVC (15.8/7) is the only compiler available there [on godbolt] that does NOT produce more optimal code for the signed integer case (at any optimization level) (STL are you listening).
I am listening, but I am not a compiler backend dev. :-) I recommend filing a bug about the missed optimization opportunity through the C++ tab of Developer Community at https://developercommunity.visualstudio.com/spaces/62/index.html . (I've been busy filing optimizer bugs against MSVC and Clang/LLVM while working on charconv; filing bugs with clear repros is the best way to help toolsets improve.) Thanks, STL
On Tue, Nov 27, 2018 at 12:26 PM Stephan T. Lavavej via Boost < boost@lists.boost.org> wrote:
[degski]
Interestingly [or maybe not so interestingly] MSVC (15.8/7) is the only compiler available there [on godbolt] that does NOT produce more optimal code for the signed integer case (at any optimization level) (STL are you listening).
I am listening, but I am not a compiler backend dev. :-) I recommend filing a bug about the missed optimization opportunity through the C++ tab of Developer Community at https://developercommunity.visualstudio.com/spaces/62/index.html .
I would have thought MSVC doesn't do this optimization so it doesn't break existing code which assumes int overflow wraps around. Microsoft is usually very careful with this, and for good reasons.
[Emil Dotchevski]
I would have thought MSVC doesn't do this optimization so it doesn't break existing code which assumes int overflow wraps around. Microsoft is usually very careful with this, and for good reasons.
Seems likely. Perhaps we should have a documentation page, "Optimizations Which MSVC Intentionally Avoids". STL
On Tue, 27 Nov 2018 at 22:26, Stephan T. Lavavej
[degski]

Interestingly [or maybe not so interestingly] MSVC (15.8/7) is the only compiler available there [on godbolt] that does NOT produce more optimal code for the signed integer case (at any optimization level) (STL are you listening).
I am listening, but I am not a compiler backend dev. :-)
I'll give the guys in the garden shed a shout [**hollering**].
I recommend filing a bug about the missed optimization opportunity through the C++ tab of Developer Community at https://developercommunity.visualstudio.com/spaces/62/index.html .
Well, it's UB, so anything can happen, so, is it a bug? It surely misses an opportunity [one compiler just returns 1 on the first function and does absolutely nothing else]. Wrapping signed ints doesn't make much sense [and mixes hardware with software, but hey, that'll go away with C++20]. I'm sympathetic to the effort [of turning it [the UB] into something that at some level does make sense]. The problem in my view is that [exactly what you've said below] people depend on it and therefore start writing non-portable code, without even knowing it. That I think is bad. Is there a warning? I bet clang-cl will make a mess of the code that works so well with MSVC. Please confirm that you still want me to file a bug; if yes, I'll do so. degski -- *“If something cannot go on forever, it will stop” - Herbert Stein*
Since this is very likely to be by design forever, I think you can skip filing a bug.
STL
AMDG On 11/26/2018 03:57 PM, Gavin Lambert via Boost wrote:
On 27/11/2018 11:29, Steven Watanabe wrote:
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks.
I'm not aware of any compilers that actually do that, even in debug mode.
g++ -ftrapv
(Apart from ubsan, of course, but that also checks for unsigned "overflow".)
Sure, but that generates false positives. The signed overflow check never generates false positives, because it's always a bug.
Avoiding possible undefined behaviour sounds like a good reason to avoid signed integers in general. (Except where actually required.)
I strongly disagree. Given that some operation has no sensible behavior (for example a + b where the mathematical sum does not fit in the given type), there are four possible ways to solve the problem:

a) Define the behavior to do something
b) Assume that it never happens
c) Check for it and issue an error
d) Make it syntactically impossible

(d) is the ideal solution, but is often infeasible. In the case of integer addition, it means returning a type which is large enough to hold any possible sum. Python uses this approach with int and long, but it's not really practical in C++ because of the performance implications.

(a) is what you get by using unsigned. I consider this the worst possible choice because it combines the drawbacks of (b) and (c) without any of their benefits. It is an acceptable compromise if it is enabled by compiler options (i.e. -fwrapv) rather than being required semantics.

(b) may give better performance, but would be unacceptable from the point of view of correctness, if it weren't for the fact that to enable it, you have to define the semantics as undefined behavior, which doesn't exclude using (c) for debug builds.

(c) is what safe languages use and is good from the standpoint of correctness, but may have a significant performance penalty.

The most important feature of undefined behavior is that it allows you to choose the behavior that you want instead of hard-coding it into the program.
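A sketch of option (c) in today's C++, using the GCC/Clang builtin where available and a portable pre-check otherwise (the function name is ours):

#include <climits>
#include <stdexcept>

int checked_add( int a, int b )  // option (c): detect and report the error
{
#if defined(__GNUC__)
    int r;
    if( __builtin_add_overflow(a, b, &r) )
        throw std::overflow_error("int addition overflowed");
    return r;
#else
    if( (b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b) )
        throw std::overflow_error("int addition overflowed");
    return a + b;
#endif
}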
(Not that unintended wrapping of unsigned integers is much better, but at least it's less likely to summon Nyarlathotep.)
You're contradicting yourself. Extremely bizarre behavior from UB is usually a result of the optimizer. If the optimizer doesn't get involved, there's no real difference between the behavior of signed overflow and unsigned overflow (assuming that neither is intended). In Christ, Steven Watanabe
On Tue, Nov 27, 2018 at 7:18 PM Steven Watanabe via Boost
On 11/26/2018 03:57 PM, Gavin Lambert via Boost wrote:
Avoiding possible undefined behaviour sounds like a good reason to avoid signed integers in general. (Except where actually required.)
I strongly disagree. Given that some operation has no sensible behavior (for example a + b where the mathematical sum does not fit in the given type), there are four possible ways to solve the problem:
a) Define the behavior to do something b) Assume that it never happens c) Check for it and issue an error d) Make it syntactically impossible
(b) may give better performance, but would be unacceptable from the point of view of correctness, if it weren't for the fact that to enable it, you have to define the semantics as undefined behavior which doesn't exclude using (c) for debug builds.
The most important feature of undefined behavior is that it allows you to choose the behavior that you want instead of hard-coding it into the program.
I disagree. I think the choice is only there as long as you target a specific language implementation, or the very few of them which offer the choices you care about. If you're trying to write portable code, like we most often do in Boost, for example, you really don't have a choice and have to assume the behavior is really undefined. And this is the worst thing, because it is a constant source of bugs and is not useful to the programmer. The ironic thing is that you can't even check for a possible overflow in portable C++, since "it doesn't happen" by the language rules and the compiler is allowed to subvert or completely discard your checks. Over the years I have grown a habit of avoiding signed integers where not genuinely needed. On the topic of performance gains, yes, there are some, but I'm not really convinced that they are significant enough in general, or that they can't be achieved by careful coding where truly needed.
On 27.11.2018 at 19:58, Andrey Semashev via Boost wrote:
I disagree. I think, the choice is only there as long as you target a specific language implementation, or the very few of them which offer the choices you care about. If you're trying to write portable code, like we most often do in Boost, for example, you really don't have a choice and have to assume the behavior is really undefined. And this is the worst thing because it is a constant source of bugs and is not useful to the programmer. The ironic thing is that you can't even check for a possible overflow in portable C++ since "it doesn't happen" by the language rules and the compiler is allowed to subvert or completely discard your checks. Over the years I have grown a habit of avoiding signed integers where not genuinely needed.
On the topic of performance gains, yes, there are some, but I'm not really convinced they are significant enough in general and can't be achieved by careful coding where truly needed.
Exactly my position in this regard. If only there were a signed integral type with modulo arithmetic in C++. In other words: the raw underlying hardware implementation without the UB semantics demanded by C++. Ciao Dani
AMDG On 11/27/2018 12:14 PM, Daniela Engert via Boost wrote:
On 27.11.2018 at 19:58, Andrey Semashev via Boost wrote:
<snip> Exactly my position in this regard. If only there were a signed integral type with modulo arithmetic in C++. In other words: the raw underlying hardware implementation without the UB semantics demanded by C++.
That would be useful if two's complement arithmetic were really the correct behavior. Since that's rarely the case, it would only serve to mask the problem. In Christ, Steven Watanabe
On Tue, Nov 27, 2018 at 11:14 AM Daniela Engert via Boost < boost@lists.boost.org> wrote:
On 27.11.2018 at 19:58, Andrey Semashev via Boost wrote:
I disagree. I think, the choice is only there as long as you target a specific language implementation, or the very few of them which offer the choices you care about. If you're trying to write portable code, like we most often do in Boost, for example, you really don't have a choice and have to assume the behavior is really undefined. And this is the worst thing because it is a constant source of bugs and is not useful to the programmer. The ironic thing is that you can't even check for a possible overflow in portable C++ since "it doesn't happen" by the language rules and the compiler is allowed to subvert or completely discard your checks. Over the years I have grown a habit of avoiding signed integers where not genuinely needed.
On the topic of performance gains, yes, there are some, but I'm not really convinced they are significant enough in general and can't be achieved by careful coding where truly needed.
Exactly my position in this regard. If only there were a signed integral type with modulo arithmetic in C++. In other words: the raw underlying hardware implementation without the UB semantics demanded by C++.
The result would be: 1) Compilers would generate less optimal code, as demonstrated by this discussion. 2) You will still get a mathematically incorrect result most of the time.
On 27.11.2018 at 21:02, Emil Dotchevski via Boost wrote:
On Tue, Nov 27, 2018 at 11:14 AM Daniela Engert via Boost < boost@lists.boost.org> wrote:
If only there were a signed integral type with modulo arithmetic in C++. In other words: the raw underlying hardware implementation without the UB semantics demanded by C++.
The result would be:
1) Compilers would generate less optimal code, as demonstrated by this discussion. 2) You will still get a mathematically incorrect result most of the time.
Did you actually bother to think about my proposal? I don't want to take away 'int' (as it is specified in the language). I want an *additional* signed integral type with a sane mathematical definition: an abelian group with modulus. And it should offer a widening multiplication N * N -> 2N in addition to the regular one N * N -> N. The compiler should know about this type and lower its operations to the corresponding hardware instruction (in most cases a single one). Ciao Dani
On 28.11.2018 at 17:48, Peter Dimov via Boost wrote:
Daniela Engert wrote:
I want an *additional* signed integral type with a sane mathematical definition: an abelian group with modulus.
In what cases would you use this type?
Signal Processing. For example, a FIR filter requires the widening multiplication N * N -> 2N, followed by a modulo accumulation 2N + 2N -> 2N. This MAC (multiply and accumulate) step will repeat M times (with M the FIR length). The accumulation will most likely wrap around multiple times amidst the FIR calculation and then totally unwrap back into the primary value domain when the end of the calculation is reached. The output value will then probably be a shift 2N by K -> N, where K is selected such that the scaling is correct and takes the SNR gain due to the filtering into account. There you have the most basic operations in digital signal processing, which simply can't be expressed in a meaningful and efficient way with current C++ language features. Implementing this stuff in assembly is much easier than trying to achieve this in C++.
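A sketch of that MAC loop as it must be written today (types and the function name are illustrative): the widening 16x16 -> 32 multiply is spelled with a cast, and the wrapping accumulation is done in unsigned arithmetic to keep it well defined, then reinterpreted as signed at the end (implementation-defined before C++20, two's complement in practice):

#include <cstdint>
#include <cstddef>

int16_t fir( const int16_t* x, const int16_t* h, std::size_t M, int K )
{
    uint32_t acc = 0;                     // wrapping 2N-bit accumulator
    for( std::size_t i = 0; i < M; ++i )  // M taps
        acc += static_cast<uint32_t>(     // widening 16*16 -> 32 multiply
            static_cast<int32_t>(x[i]) * h[i] );
    int32_t sum = static_cast<int32_t>(acc);  // unwrap back to signed
    return static_cast<int16_t>(sum >> K);    // scale back into the N-bit range
}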
On 11/28/18 8:34 AM, Daniela Engert via Boost wrote:
On 27.11.2018 at 21:02, Emil Dotchevski via Boost wrote:
On Tue, Nov 27, 2018 at 11:14 AM Daniela Engert via Boost < boost@lists.boost.org> wrote:
If only there were a signed integral type with modulo arithmetic in C++. In other words: the raw underlying hardware implementation without the UB semantics demanded by C++.
The result would be:
1) Compilers would generate less optimal code, as demonstrated by this discussion. 2) You will still get a mathematically incorrect result most of the time.
Did you actually bother to think about my proposal? I don't want to take away 'int' (as it is specified in the language). I want an *additional* signed integral type with a sane mathematical definition: an abelian group with modulus. And it should offer a widening multiplication N * N -> 2N in addition to the regular one N * N -> N. The compiler should know about this type and lower its operations to the corresponding hardware instruction (in most cases a single one).
Would this be of value only in the (special) case where N is a power of 2? Seems like a very special case to me. For such a special case, I'd expect to be able to insert some inline assembler to do the job as opposed to altering the core language.
Ciao Dani
On 29/11/2018 07:19, Robert Ramey wrote:
On 11/28/18 8:34 AM, Daniela Engert wrote:
Did you actually bother to think about my proposal? I don't want to take away 'int' (as it is specified in the language). I want an *additional* signed integral type with a sane mathematical definition: an abelian group with modulus. And it should offer a widening multiplication N * N -> 2N in addition to the regular one N * N -> N. The compiler should know about this type and lower it's operations to the corresponding hardware instruction (in most cases a single one).
Would this be of value only in the (special) case where N is a power of 2? Seems like a very special case to me. For such a special case, I'd expect to be able to insert some inline assembler to do the job as opposed to altering the core language.
Here N is referring to the bit size of the type, which is "always" (barring weird architectures) a power of two. It is possible to define such a type as a library, using inline assembly where possible and with a fallback to a less efficient portable implementation otherwise. Direct compiler support is not required, though might be nice.
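A sketch of the portable-fallback version of such a type (wrap32 is our name): do the arithmetic in the corresponding unsigned type, where wrapping is defined, and convert back:

#include <cstdint>

struct wrap32                        // signed 32-bit integer with wrap-around
{
    int32_t v;

    friend wrap32 operator+( wrap32 a, wrap32 b )
    {
        // unsigned arithmetic wraps by definition; converting the result
        // back to int32_t is implementation-defined before C++20, but is
        // two's complement on all mainstream platforms
        return { static_cast<int32_t>(
            static_cast<uint32_t>(a.v) + static_cast<uint32_t>(b.v) ) };
    }
};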
AMDG On 11/27/2018 11:58 AM, Andrey Semashev wrote:
On Tue, Nov 27, 2018 at 7:18 PM Steven Watanabe via Boost
wrote: On 11/26/2018 03:57 PM, Gavin Lambert via Boost wrote:
<snip> The most important feature of undefined behavior is that it allows you to choose the behavior that you want instead of hard-coding it into the program.
I disagree. I think, the choice is only there as long as you target a specific language implementation, or the very few of them which offer the choices you care about. If you're trying to write portable code, like we most often do in Boost, for example, you really don't have a choice and have to assume the behavior is really undefined.
You don't "assume that it's undefined." It /is/ undefined. A correct C++ program should never do it.
And this is the worst thing because it is a constant source of bugs and is not useful to the programmer.
My point is that it's a bug regardless. Your program is wrong regardless of whether it's formally undefined behavior or just gives the wrong answer. The only difference is in how serious the symptoms are and what tools are available for finding the problem.
The ironic thing is that you can't even check for a possible overflow in portable C++ since "it doesn't happen" by the language rules and the compiler is allowed to subvert or completely discard your checks.
That only happens if you write the check incorrectly.
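For instance, a check written against the representable range never relies on the overflow occurring, so there is nothing for the optimizer to discard (a sketch; the names are ours):

#include <climits>

// Incorrect: the addition itself overflows (UB) before the test runs,
// so the compiler may optimize the whole check away.
bool overflows_wrong( int a, int b ) { return b > 0 && a + b < a; }

// Correct: only well-defined comparisons are used, before the addition.
bool overflows( int a, int b )
{
    return (b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b);
}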
Over the years I have grown a habit of avoiding signed integers where not genuinely needed.
On the topic of performance gains, yes, there are some, but I'm not really convinced they are significant enough in general and can't be achieved by careful coding where truly needed.
In Christ, Steven Watanabe
On 11/27/18 10:31 PM, Steven Watanabe via Boost wrote:
On 11/27/2018 11:58 AM, Andrey Semashev wrote:
On Tue, Nov 27, 2018 at 7:18 PM Steven Watanabe via Boost
wrote: On 11/26/2018 03:57 PM, Gavin Lambert via Boost wrote:
<snip> The most important feature of undefined behavior is that it allows you to choose the behavior that you want instead of hard-coding it into the program.
I disagree. I think, the choice is only there as long as you target a specific language implementation, or the very few of them which offer the choices you care about. If you're trying to write portable code, like we most often do in Boost, for example, you really don't have a choice and have to assume the behavior is really undefined.
You don't "assume that it's undefined." It /is/ undefined.
Sure, that's my point. I was merely pointing out that there's no "choice" to make.
A correct C++ program should never do it.
I wouldn't assume about user's code.
And this is the worst thing because it is a constant source of bugs and is not useful to the programmer.
My point is that it's a bug regardless. Your program is wrong regardless of whether it's formally undefined behavior or just gives the wrong answer. The only difference is in how serious the symptoms are and what tools are available for finding the problem.
Having the wrap semantics would be useful, at least for being able to portably detect an overflow. That ability alone allows you to implement whatever other behavior you might need. OTOH, having overflow undefined doesn't allow you to do anything with it.
The ironic thing is that you can't even check for a possible overflow in portable C++ since "it doesn't happen" by the language rules and the compiler is allowed to subvert or completely discard your checks.
That only happens if you write the check incorrectly.
Until the language mandates two's complement signed integers (in the yet-to-come C++20), I don't think there is a correct way. At least nothing easy enough comes to mind.
On Tue, Nov 27, 2018 at 11:59 AM Andrey Semashev via Boost < boost@lists.boost.org> wrote:
Having the wrap semantics would be useful, at least for being able to portably detect an overflow. That ability alone allows to implement whatever other behavior you might need. OTOH, having overflow undefined doesn't allow you to do anything with it.
You are right, there are real, tangible benefits to using languages that define behavior for all operations. Nobody is arguing otherwise.
On Mon, Nov 26, 2018 at 2:29 PM Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 11/26/2018 03:15 PM, Gavin Lambert via Boost wrote:
On 27/11/2018 09:40, Emil Dotchevski wrote:
This is not what unsigned is for. You should generally use signed ints even if the value can't be negative.
Why? Other than being less typing (which is, I think, the real mistake), what reason is there for this?
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks.
I thought that signed integer overflow was implementation-defined. But yes, runtime checks, and not just for the compiler, also in asserts. Given that signed types implicitly convert to unsigned types, it is precisely when a number must not be negative that you should use a signed int, so that you can actually detect bugs when it is negative.

Another reason is that unsigned ints are just wrong sometimes. For example, if you want to represent width, and you reason that it can't be negative, so you do:

unsigned width;

On the other hand, coordinates can reasonably be negative, so you use int:

int x;

Then you write something like:

if( x-width<5 )

Oops.
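Filling in concrete values (ours) shows the failure: the subtraction is performed in unsigned arithmetic, so a coordinate just left of the edge appears astronomically far to the right:

#include <iostream>

int main()
{
    unsigned width = 10;
    int x = -2;
    // x is converted to unsigned: with 32-bit unsigned, -2 becomes
    // 4294967294, so x - width is 4294967284, which is not < 5
    if( x - width < 5 )
        std::cout << "near the left edge\n";  // never reached
    else
        std::cout << x - width << "\n";       // prints 4294967284
}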
On Mon, Nov 26, 2018 at 2:15 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
Fact: the cast does not do the same thing as the implicit conversion.
(The cast converts to the type you specify in the cast. The implicit conversion converts to the target type.)
No, the implicit conversion promotes to *some* type (usually one of int, unsigned, int64_t, or uint64_t), which is not necessarily the type of either.
In the end, the result does get converted to the target value type, since it is being stored in a variable of that type. I don't see your point though, are you saying that implicit conversions are dangerous and we should avoid them, using casts every time an implicit conversion might occur?
On 26.11.18 at 21:40, Emil Dotchevski via Boost wrote:
1) A cast has different semantics than an implicit conversion.
2) If the implicit conversion is correct, replacing it with a cast to silence a warning is incorrect. The first is a fact, the second is basic logic. Both are true.

Consider:

int x=1;
unsigned y=2;
if(static_cast<unsigned>(x) < y) ...

Where does this cast change any semantics? The implicit conversion changes x to unsigned *in this case*; the cast explicitly does the same and hence has the same semantics and is correct too, which contradicts both points. Of course this may no longer be true if code is refactored, but for refactoring you always need to check what you do.

(The cast converts to the type you specify in the cast. The implicit conversion converts to the target type.)

So? You can even use both:

short x=1;
unsigned y=2;
if(static_cast<unsigned short>(x) < y)

What you're saying about disabling the warning is equally true about the cast. You're thinking that if you look at it and say it's ok, then the cast can't be wrong, but that's false. If you change the compiler or the platform, or under maintenance, the cast may become the cause of a bug, which it will then hide because you've told the compiler you know what you're doing.

The only valid argument is "under maintenance"; the others are wrong: If you locally checked that the implicit cast is correct (e.g. by asserts, logic, ...) and make it explicit, it will be correct too UNLESS types are changed afterwards without checking the cast. The same applies to the implicit cast!

unsigned f();
void g( int x )
{
    if( static_cast<unsigned>(x) < f() ) //x is guaranteed to be >=0
        ....
}
Elsewhere:
g(-1);
That is the example I have brought: If you can control ALL calls of g, because it is local enough so you can ensure its argument is non-negative, then the first version is correct. Otherwise the following is better:

void g( int x )
{
    assert(x>=0);
    if( static_cast<unsigned>(x) < f() )
        ....
}

The cast will tell the compiler that this is correct, so the compiler will not alert everyone that it may not be correct. Without the cast and the assert, the code will be wrong exactly for the same example you brought, `g(-1)`, and as there are 100s of warnings of that kind, you ignore them, so you don't find the one case where the warning was correct...
First, the assert will catch the bug, while the cast will hide it. Secondly, the implicit conversion is both correct and more likely to remain correct under maintenance, because the compiler will pick the best conversion if the types change.

It won't pick the "best" conversion. Why is, for "int < unsigned", the conversion to unsigned the "best"?

I'm not against doing something that suppresses the warning without changing the program semantically, though these things tend to be ugly and difficult to port, so they're impractical.

How is the assert-and-cast approach ugly and difficult to port?

But adding a cast changes the behavior of the program. It is not a good idea to break a correct program in order to suppress a bogus warning.
If you are so concerned, use your own functions:

template<class T>
auto unsigned_cast(T v) {
    assert(v >= 0);
    return static_cast<std::make_unsigned_t<T>>(v);
}
signed integer overflow has undefined behavior. unsigned integers wrap. This means that signed integers give the compiler more scope for optimization and/or runtime checks.
This optimization scope is quite narrow and now you have possibly introduced UB into your program.
It's not perfectly fine, because you can pass a negative index to it and there's no way to check for that (from within the function). If you take a signed type, you can assert.
MSVC warns when passing a signed to an unsigned param: https://godbolt.org/z/1ydXuL So you would need to assert-and-cast at the call site but you don't need such asserts for e.g. `for(int i=0; i<10; i++) a.set(i) = i;`
I don't see your point though, are you saying that implicit conversions are dangerous and we should avoid them, using casts every time an implicit conversion might occur?
Only for signed<>unsigned conversions. Widening implicit conversions are fine as they are safe.
void f( unsigned );
void g( int x ) { f(x); }

I don't think you'll get a warning. But I might be wrong, so let's say you do get a warning, so you do:

void g( int x ) { f( static_cast<unsigned>(x) ); }
MSVC does: https://godbolt.org/z/1ydXuL And the 2nd one is missing reasoning why this cast is correct (whether explicit or implicit) and potentially an assert. With the given information the original code is wrong already.
On 24/11/2018 08:58, Emil Dotchevski wrote:
I'm guessing you want an example where the explicit conversion is incorrect, and you want to use an implicit conversion even though you get a warning. Here:
unsigned f();
void g( int x )
{
    if( x < f() ) //warning C4018: '<': signed/unsigned mismatch
    {
        ....
    }
}
So you change that to:
if( static_cast<unsigned>(x) < f() )
Then under refactoring both f and g get changed:
unsigned long f();
void g( long x )
{
    if( static_cast<unsigned>(x) < f() )
    {
        ....
    }
}
auto xu = std::make_unsigned_t<decltype(x)>(x);
if (xu < f())

This expresses your intent of using the unsigned version of the parameter.
assert(x>=0);
That helps -- assuming that people actually check asserts, which isn't always the case.
On Sun, Nov 25, 2018 at 3:05 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 24/11/2018 08:58, Emil Dotchevski wrote:
I'm guessing you want an example where the explicit conversion is incorrect, and you want to use an implicit conversion even though you get a warning. Here:
unsigned f();
void g( int x )
{
    if( x < f() ) //warning C4018: '<': signed/unsigned mismatch
    {
        ....
    }
}
So you change that to:
if( static_cast<unsigned>(x) < f() )
Then under refactoring both f and g get changed:
unsigned long f();
void g( long x )
{
    if( static_cast<unsigned>(x) < f() )
    {
        ....
    }
}
auto xu = std::make_unsigned_t<decltype(x)>(x);
if (xu < f())

This expresses your intent of using the unsigned version of the parameter.
You probably mean:

auto xu = std::make_unsigned<decltype(x)>::type(x);

This is similar to the implicit conversion, except it doesn't eliminate the other implicit conversion which will occur if the return type of f is of size greater than the size of x, which will likely earn you another warning. Therefore you probably want:

auto xu = std::make_unsigned<decltype(f())>::type(x);

Fine. Do you think programmers actually write this to silence such warnings, rather than cast? Are you arguing that they should?
assert(x>=0);
That helps -- assuming that people actually check asserts, which isn't always the case.
Ironically, in my experience the warning police doesn't care about the assert, even though it will actually catch the bugs they're trying to defend against, while the warning, silenced with a cast, won't.
On 26/11/2018 15:50, Emil Dotchevski wrote:
auto xu = std::make_unsigned_t<decltype(x)>(x);
if (xu < f())

This expresses your intent of using the unsigned version of the parameter.
You probably mean
auto xu = std::make_unsigned<decltype(x)>::type(x);
No, I meant what I wrote.
This is similar to the implicit conversion, except it doesn't eliminate the other implicit conversion which will occur if the return type of f is of size greater than the size of x, which will likely earn you another warning.
That was intentional, because that's another decision that has to be made explicitly, with evaluation of the range constraints on f()'s return value.
Therefore you probably want
auto xu = std::make_unsigned<decltype(f())>::type(x);
No, that's pointless. f() is already known to return an unsigned type or you wouldn't have received the original warning in the first place. You could use decltype(f()) by itself, but that doesn't really help unless you know that it is always wider than x anyway.
Fine. Do you think programmers actually write this to silence such warnings, rather than cast? Are you arguing that they should?
I don't think many currently do this, no. I wonder if they should, given your complaint, but it's probably more generally useful to encourage use of safe_numerics or similar instead.
Ironically, in my experience the warning police doesn't care about the assert, even though it will actually catch the bugs they're trying to defend against, while the warning, silenced with a cast, won't.
True. The assert is more explicit and more useful (if actually checked), but people seem unreasonably afraid of adding them for some reason.
On Sun, Nov 25, 2018 at 8:52 PM Gavin Lambert via Boost <boost@lists.boost.org> wrote:

On 26/11/2018 15:50, Emil Dotchevski wrote:

This is similar to the implicit conversion, except it doesn't eliminate the other implicit conversion which will occur if the return type of f is of size greater than the size of x, which will likely earn you another warning.
That was intentional, because that's another decision that has to be made explicitly, with evaluation of the range constraints on f()'s return value.
Yes, so you need what I wrote (below)...
Therefore you probably want
auto xu = std::make_unsigned<decltype(f())>::type(x);

No, that's pointless. f() is already known to return an unsigned type or you wouldn't have received the original warning in the first place.
Except, obviously, I meant std::make_signed
Fine. Do you think programmers actually write this to silence such warnings, rather than cast? Are you arguing that they should?
I don't think many currently do this, no. I wonder if they should, given your complaint, but it's probably more generally useful to encourage use of safe_numerics or similar instead.
No, they shouldn't, instead they should learn the semantics of C/C++ implicit conversions and disable the noob warning.
Ironically, in my experience the warning police doesn't care about the assert, even though it will actually catch the bugs they're trying to defend against, while the warning, silenced with a cast, won't.
True. The assert is more explicit and more useful (if actually checked), but people seem unreasonably afraid of adding them for some reason.
The reason is that they have the wrong mindset when approaching the problem. They think: I get a warning, because the compiler doesn't know that this signed value is always positive, but I know that it is, so I'm going to cast, to tell the compiler that I know what I'm doing. A better reasoning is: I'm converting a signed value to unsigned value, which will not be correct if the signed value is negative. With this in mind, you'll notice that the cast is utterly useless in catching the actual error the compiler is warning you about.
On Mon, 26 Nov 2018 at 08:38, Emil Dotchevski via Boost < boost@lists.boost.org> wrote:
The reason is that they have the wrong mindset when approaching the problem.
They think: I get a warning, because the compiler doesn't know that this signed value is always positive, but I know that it is, so I'm going to cast, to tell the compiler that I know what I'm doing.
A better reasoning is: I'm converting a signed value to unsigned value, which will not be correct if the signed value is negative. With this in mind, you'll notice that the cast is utterly useless in catching the actual error the compiler is warning you about.
+1, the above summarizes the issue quite nicely, I think. degski -- *“If something cannot go on forever, it will stop” - Herbert Stein*
I agree with you. I understand the desire of the industry to institute demonstrable measures of quality in the face of mounting costs associated with certifying code. Or should I say, the increasing size and complexity of code that they want to certify. But the result is often a cast that is placed without good understanding of the context. I'd love to see more emphasis on tools like Coverity and Klocwork, which look for less obvious issues, and languages like Rust that avoid common issues with more rigorous syntax. At least with boost.org we have the benefit of some very good maintainers to act as gatekeepers and avoid the worst of these uninformed revisions. On Mon, 19 Nov 2018 at 15:03, Emil Dotchevski via Boost < boost@lists.boost.org> wrote:
On Mon, Nov 19, 2018 at 11:21 AM Brian Kuhl via Boost < boost@lists.boost.org> wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and
VxWorks
7 (with Dinkum 7.00 with and Clang 6.0 for ARM and IA boards and GCC 8.1 for PowerPC ) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
In that context there are probably legal reasons for a zero-warning policy, but it is not true that lack of warnings means fewer errors; in fact it could easily lead to more errors. For example, warnings about implicit conversions are frequently "fixed" by replacing the implicit conversion with an explicit conversion (cast). But the two are semantically very different, and therefore one of them is incorrect, and very likely it is the explicit one.
On Mon, 19 Nov 2018, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 with and Clang 6.0 for ARM and IA boards and GCC 8.1 for PowerPC ) with the hope of bundling Boost with our product.
If you are bundling Boost with your product, can't you mandate the use of -isystem to include it, and thus disable most boost warnings at once? -- Marc Glisse
On Mon, Nov 19, 2018 at 12:34 PM Marc Glisse via Boost < boost@lists.boost.org> wrote:
On Mon, 19 Nov 2018, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 with and Clang 6.0 for ARM and IA boards and GCC 8.1 for PowerPC ) with the hope of bundling Boost with our product.
If you are bundling Boost with your product, can't you mandate the use of -isystem to include it, and thus disable most boost warnings at once?
+1. And there is also #pragma system_header, which I use in my Boost libraries, but people who think that being pedantic about warnings is a good idea don't like that at all.
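For reference, a sketch of both mechanisms (paths are illustrative; the pragma spelling below is GCC/Clang's form of the system_header pragma mentioned above):

g++ -Wall -Wextra -isystem third_party/boost -c app.cpp

// at the top of a library header:
#if defined(__GNUC__)
# pragma GCC system_header  // the rest of this header produces no warnings
#endif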
On 11/19/2018 2:20 PM, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 with and Clang 6.0 for ARM and IA boards and GCC 8.1 for PowerPC ) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost in regards to compiler warnings. While it is simple to say, everything displayed by -Wall, in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers guidance.
Warning free is a nice ideal, but the truth is more complicated. Many warnings are not really about possible coding mishaps but about a sort of lint-like standard which compilers now want to enforce and have nothing to do with correct C++ code. Other warnings are simply erroneous, as is prevalent most everywhere in the current default VC++ preprocessor. Eliminating even the first class of warnings is often a fool's errand AFAIAC. All one ends up doing is adding completely unnecessary code restrictions to what is perfectly acceptable C++ code.

A typical example is some variable whose name hides a variable in an outer scope. This is perfectly acceptable C++ code, but now many compilers issue a warning when this happens. I object to having to worry about coming up with a unique name for every variable in whatever scope in some translation unit just to avoid such a warning.

This process of warnings enforcing coding standards goes on and on, and I do not think it is possible, or advisable timewise, to spend unnecessary time pleasing every compiler's notion of what these lint-like warnings entail. I am not against spending time eliminating warnings when the warning itself points to some loose use of C++. But I am against spending time on eliminating warnings when the warning is just some lint-like alarm over what might be an error but is perfectly normal and acceptable C++ code.
Many thanks
Brian Kuhl
On Mon, Nov 19, 2018 at 1:02 PM Edward Diener via Boost < boost@lists.boost.org> wrote:
I do not think it is possible, or advisable timewise, to spend unnecessary time pleasing every compiler's notion of what these lint-like warnings entail.
+1 This can be done for an enumerated list of compiler versions and platforms, but it can NOT be done in general. Another compiler (or the next version of the same compiler) will introduce new warnings. The idea that warnings always indicate potential problems is fundamentally flawed. It follows that always running the compiler at the maximum warning setting is fundamentally flawed, too. Its legitimate use is not as a shield against bugs, but as a debugging tool: if you suspect that a source file contains a subtle bug, you could compile it with maximum linting, examine every warning closely, then turn it back off.
On Tue, 20 Nov 2018 at 01:31, Emil Dotchevski via Boost < boost@lists.boost.org> wrote:
if you suspect that a source file contains a subtle bug, you could compile it with maximum linting, examine every warning closely, then turn it back off.
+1 Or simply just once in a while (during dev) walk through all those warnings just to verify nothing is actually wrong; at least it [the compiler] tells you where to have a look. It's now much better, but there was a time when running the highest warning level with the VC-STL would turn up mostly warnings in the STL. I also used the ICC for a long time [in the past]; high warning levels were simply impossible [with the VC-STL], which gave the impression the whole VC-STL needed a re-write [which it probably did ;-) ]. degski -- *“If something cannot go on forever, it will stop” - Herbert Stein*
On November 19, 2018 4:01:45 PM EST, Edward Diener via Boost wrote:
Warning free is a nice ideal, but the truth is more complicated. Many warnings are not really about possible coding mishaps but about a sort of lint-like standard which compilers now want to enforce and have nothing to do with correct C++ code. Other warnings are simply erroneous, as is prevalent most everywhere in the current default VC++ preprocessor. Eliminating even the first class of warnings is often a fool's errand AFAIAC. All one ends up doing is adding completely unnecessary code restrictions to what is perfectly acceptable C++ code.
That's a broad claim not generally borne out in my experience.
A typical example is some variable whose name hides a variable in an outer scope. This is perfectly acceptable C++ code but now many compilers issue a warning when this happens. I object to having to worry about coming up with a unique name to all variables in whatever scope in some translation unit just to avoid such a warning.
That warning is not to enforce some notion of how code should be written. It seeks to highlight a certain class of errors. You may have been certain that the reuse of a name is valid when writing the code, but anyone, including you at a later time, may be bitten by a subsequent change to the code.

If an intervening declaration is removed, perhaps unintentionally as the result of other editing, later code referring to that formerly reused name will suddenly refer to the earlier declaration. The resulting code may compile, but will surely not be what was intended.

Another error that warning helps with is when a name is reused and a maintainer sees an earlier, but not a later, declaration of the reused identifier. The code may compile, yet not work as the maintainer intended. You may argue that testing should reveal the resulting problems, but test coverage is rarely so complete as to reliably account for all cases. Renaming the variables will prevent those problems, though you do have to invent unique names.
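A compilable sketch of the first failure mode (the names are ours): remove the inner declaration during an edit and the code still compiles, now silently operating on the outer variable:

#include <cstdio>

int count = 100;                 // outer declaration

void tally()
{
    int count = 0;               // shadows the outer one; -Wshadow warns here
    for( int i = 0; i < 5; ++i )
        ++count;
    std::printf("%d\n", count);  // prints 5; if the inner declaration is
                                 // deleted, this still compiles and prints 105
}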
This process of warnings enforcing coding standards goes on and on and I do not think it is possible, or advisable timewise, to spend unnecessary time pleasing every compiler's notion of what these lint-like warnings entail. I am not against spending time eliminating warnings when the warning itself points to some loose use of C++. But I am against spending time on eliminating warnings when the warning is just some lint-like alarm over what might be an error but is perfectly normal and acceptable C++ code.
Preventing possible errors is a worthwhile goal. Such warnings, at least a subset of them, can be helpful. There are, of course, warnings that border on the ridiculous, but don't ignore the value of the majority. -- Rob (Sent from my portable computation device.)
On 11/21/2018 7:02 PM, Rob Stewart via Boost wrote:
On November 19, 2018 4:01:45 PM EST, Edward Diener via Boost
wrote: Warning free is a nice ideal, but the truth is more complicated. Many warnings are not really about possible coding mishaps but about a sort of lint-like standard which compilers now want to enforce and have nothing to do with correct C++ code. Other warnings are simply erroneous, as is prevalent most everywhere in the current default VC++ preprocessor. Eliminating even the first class of warnings is often a fool's errand AFAIAC. All one ends up doing is adding completely unnecessary code restrictions to what is perfectly acceptable C++ code.
That's a broad claim not generally borne out in my experience.
A typical example is some variable whose name hides a variable in an outer scope. This is perfectly acceptable C++ code but now many compilers issue a warning when this happens. I object to having to worry about coming up with a unique name to all variables in whatever scope in some translation unit just to avoid such a warning.
That warning is not to enforce some notion of how code should be written. It seeks to highlight a certain class of errors. You may have been certain that the reuse of a name is valid when writing the code, but anyone including you at a later time, may be bitten by a subsequent change to the code.
If an intervening declaration is removed, perhaps unintentionally as the result of other editing, later code referring to that formerly reused name will suddenly refer to the earlier declaration. The resulting code may compile, but will surely not be what was intended.
Your response is typical of those programmers who seek a language that will keep programmers from making mistakes through restrictions on what the language should allow. That language you seek is not C++, thank God. Whether the restriction is an actual rule of the language, or whether the restriction is a warning which others tell me I must eliminate, makes little difference to me. Try another language like Ada, or one of those many safer languages whose "safe" rules make it much more tedious to program than C++.
Another error that warning helps with is when a name is reused and a maintainer sees an earlier, but not a later, declaration of the reused identifier. The code may compile, yet not work as the maintainer intended.
That is a red herring. Any code may compile and not do what was intended.
You may argue that testing should reveal the resulting problems, but test coverage is rarely so complete as to reliably account for all cases. Renaming the variables will prevent those problems, though you do have to invent unique names.
Sorry, programming in a language where every variable in a TU, even the most trivial, has to have some utterly unique name in that TU, and whose names I am expected to all know and keep track of, is not for me. Torture yourself with such absurdities, in the name of safe programming no doubt, but I want none of it.
This process of warnings enforcing coding standards goes on and on and I do not think it is possible, or advisable timewise, to spend unnecessary time pleasing every compiler's notion of what these lint-like warnings entail. I am not against spending time eliminating warnings when the warning itself points to some loose use of C++. But I am against spending time on eliminating warnings when the warning is just some lint-like alarm over what might be an error but is perfectly normal and acceptable C++ code.
Preventing possible errors is a worthwhile goal. Such warnings, at least a subset of them, can be helpful. There are, of course, warnings that border on the ridiculous, but don't ignore the value of the majority.
I agree with you that preventing possible errors is a worthwhile goal. Where we disagree sharply is the tedious elimination of warnings on perfectly correct code as a path to that prevention. It is a huge waste of time, and totally unnecessary to boot, to change code to prevent warnings when the code is correct in doing what it aims to accomplish. Programmers have much better and harder things to do in the design and implementation of their ideas and algorithms than to worry about what any compiler implementation thinks should be a reason for issuing warnings. I am not saying that all warnings should be unheeded. I am saying that trying to fix every warning for every compiler, because some end user insists that he wants to compile with the highest level of warnings and never see any warnings, is not a practical way of writing software.
-- Rob
(Sent from my portable computation device.)
On Wed, Nov 21, 2018 at 5:01 PM Edward Diener via Boost < boost@lists.boost.org> wrote:
I agree with you that preventing possible errors is a worthwhile goal. Where we disagree sharply is the tedious elimination of warnings on perfectly correct code as a path to that prevention. It is a huge waste of time, and totally unnecessary to boot, to change code to prevent warnings when the code is correct in doing what it aims to accomplish.
It's actually worse than that: enforcing warning-free compilation leads to bugs. First, any change in a correct program can introduce a new bug. Secondly, "fixing" a warning often alters the semantics of the operation in a subtle way. Consider for example the use of an explicit cast to silence a warning that an implicit conversion may lead to something bad. You may think that all you've done is silence the warning, but in fact you've changed the program semantically: it no longer does an implicit type conversion, instead it does an explicit one. And the thing is, one of the two is incorrect. Now, if your programmers don't understand how implicit type conversions work in C++, it is possible that enforcing warning-free compilation avoids more bugs than it causes. If that's the case, there's no argument.
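To make that concrete, a sketch of how a silencing cast can outlive its correctness (the types and values below are invented):

#include <cstdint>
#include <iostream>

void store(std::int16_t value) { std::cout << value << '\n'; }

int main()
{
    std::int32_t total = 40000;  // suppose this once fit comfortably in 16 bits
    // store(total);             // would warn: conversion may lose data
    store(static_cast<std::int16_t>(total)); // warning silenced for good: this
                                             // prints -25536 on a two's
                                             // complement machine, and no
                                             // compiler will ever mention it
}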
On 22/11/2018 14:49, Emil Dotchevski wrote:
First, any change in a correct program can introduce a new bug. Secondly, "fixing" a warning often alters the semantics of the operation in a subtle way. Consider for example the use of an explicit cast to silence a warning that an implicit conversion may lead to something bad. You may think that all you've done is silence the warning, but in fact you've changed the program semantically: it no longer does an implicit type conversion, instead it does an explicit one. And the thing is, one of the two is incorrect.
Now, if your programmers don't understand how implicit type conversions work in C++, it is possible that enforcing warning-free compilation avoids more bugs than it causes. If that's the case, there's no argument.
Perhaps I am one of these programmers who don't understand how implicit type conversions work (although I don't believe so), but...

Where implicit conversions are permitted, there is no difference in actual compiled behaviour at the call site between an implicit and explicit conversion. At all. They are utterly identical.

There *is* a difference in the warnings, because the compiler will emit warnings "hey, this can do unintended things sometimes, did you really want to do that?" for the implicit case and not for the explicit case, because for the latter it assumes you know what you're doing.

There *is* also a difference at the target site (the class implementing the conversion) and its choice of whether it should allow or prevent implicit conversions (and as is typical of C++'s defaults being wrong most of the time, it is usually wrong to allow implicit conversions -- though not always). (And getting that wrong can cause unintended implicit conversions, but those usually present themselves differently from the warnings we're talking about here.)

What is also true is that it is a bad idea to get into the habit of seeing such a warning and thinking "I know, I'll fix it with an explicit cast" without looking closely at the code -- because *in some cases* this ends up concealing a genuine bug in the code which the compiler was trying to helpfully warn you about (such as potential for integer overflow or narrowing).

Ensuring that code compiles without warnings is very helpful, to make it easier to spot when new warnings crop up and need to be examined. *But* you do have to be disciplined and analyse and test (preferably with input fuzzing) the code where the warnings appear, not just blindly (and sometimes incorrectly) putting in a cast to make the compiler shut up.

The compiler doesn't just issue warnings on any implicit conversion, only on those that seem potentially unsafe. So you should not be ignoring these, you should be fixing them. Sometimes that means making an explicit conversion instead. Sometimes that means changing types or verifying input sanity. Sometimes something else. But it shouldn't be left producing warnings when it can be (properly) fixed.
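One concrete shape of such a proper fix is a checked narrowing conversion; a minimal sketch (the helper below is illustrative, not a Boost facility - gsl::narrow from the Core Guidelines support library is a production-quality equivalent):

#include <stdexcept>
#include <iostream>

// Convert only if the value survives the round trip. (A signed/unsigned
// mix needs an additional sign check, which gsl::narrow performs.)
template <class To, class From>
To checked_narrow(From value)
{
    To narrowed = static_cast<To>(value);
    if (static_cast<From>(narrowed) != value)
        throw std::range_error("value does not fit in the target type");
    return narrowed;
}

int main()
{
    std::cout << checked_narrow<short>(123) << '\n'; // fine: prints 123
    checked_narrow<short>(100000);                   // throws std::range_error
}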
On Wed, Nov 21, 2018 at 7:43 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 22/11/2018 14:49, Emil Dotchevski wrote:
First, any change in a correct program can introduce a new bug. Secondly, "fixing" a warning often alters the semantics of the operation in a subtle way. Consider for example the use of an explicit cast to silence a warning that an implicit conversion may lead to something bad. You may think that all you've done is silence the warning, but in fact you've changed the program semantically: it no longer does an implicit type conversion, instead it does an explicit one. And the thing is, one of the two is incorrect.
Now, if your programmers don't understand how implicit type conversions work in C++, it is possible that enforcing warning-free compilation avoids more bugs than it causes. If that's the case, there's no argument.
Perhaps I am one of these programmers who don't understand how implicit type conversions work (although I don't believe so), but...
Where implicit conversions are permitted, there is no difference in actual compiled behaviour at the call site between an implicit and explicit conversion. At all. They are utterly identical.
Nope, they're different. Implicit conversions convert to the target type,
while a static_cast converts to what you tell it to convert. Consider:
void f(long);
void f(short);
void f(double);
....
f(x);
Is semantically different from
f( static_cast<short>(x) );
Things get even more interesting if you cast to a typedef:
f( static_cast<some_typedef>(x) );
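To flesh out that last line ('some_typedef' is a stand-in name; the point is a typedef whose underlying type may later change):

#include <iostream>

typedef short some_typedef; // imagine this later becomes: typedef long some_typedef;

void f(long)   { std::cout << "f(long)\n"; }
void f(short)  { std::cout << "f(short)\n"; }
void f(double) { std::cout << "f(double)\n"; }

int main()
{
    int x = 42;
    f(static_cast<some_typedef>(x)); // resolves to f(short) today; if the
                                     // typedef is later changed to long, this
                                     // silently starts calling f(long) instead
}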
On 22/11/2018 19:08, Emil Dotchevski wrote:
On Wed, Nov 21, 2018 at 7:43 PM Gavin Lambert wrote:
Where implicit conversions are permitted, there is no difference in actual compiled behaviour at the call site between an implicit and explicit conversion. At all. They are utterly identical.
Nope, they're different. Implicit conversions convert to the target type, while a static_cast converts to what you tell it to convert. Consider:
void f(long); void f(short); void f(double); ....
f(x);
Is semantically different from
f( static_cast<short>(x) );
Things get even more interesting if you cast to a typedef:
f( static_cast<some_typedef>(x) );
The point is, if casting is correct, then the implicit conversion is not. If the implicit conversion is correct, then casting is not. You should write the correct code even if that leads to a warning. And if you must silence the warning, you should do it without changing the behavior of a correct program.
If you are writing generic code and do not know the actual type of x, perhaps. In that case you are either relying on overload resolution to find a compatible or matching type among an unknown-to-you overload set (in which case you should not "defend" against warnings, because it is the responsibility of whatever code gave you x to ensure that there is a sufficiently compatible overload available to resolve the warning). Or you are trying to enforce conversion to a specific type to call a specific overload known to you, in which case you do the cast (although I think function-style casts are preferred now over static_cast). Or some hybrid of the two, where you don't know x but you do know the possible overload set, in which case you should probably be constraining x to be one of those exact types and then let the implicit or explicit cast happen outside of your code.

But for non-generic code where you do know the actual type of x and the possible overloads, you should already know which one it will call, and it's your responsibility to make sure that the right overload exists or to perform the cast to avoid a warning. The two are equivalent.
On November 21, 2018 8:00:47 PM EST, Edward Diener via Boost wrote:
On 11/21/2018 7:02 PM, Rob Stewart via Boost wrote:
That warning is not there to enforce some notion of how code should be written. It seeks to highlight a certain class of errors. You may have been certain that the reuse of a name was valid when writing the code, but anyone, including you at a later time, may be bitten by a subsequent change to the code. If an intervening declaration is removed, perhaps unintentionally as the result of other editing, later code referring to that formerly reused name will suddenly refer to the earlier declaration. The resulting code may compile, but will surely not be what was intended.
Your response is typical of those programmers who seek a language that will keep programmers from making mistakes by restrictions on what the language should allow. That language you seek is not C++ thank god.
C++ does that already and we use it to advantage. It's a strongly typed language with const and volatile modifiers, rvalue overloading, etc. We use those tools to control or restrict what can be done with our code. Choosing to code in a way that prevents or reduces the chance of error is de rigueur in C++. If everyone followed your philosophy, we'd use raw pointers to track memory allocations, etc. Who needs RAII to manage resource lifetime if the code is known to be correct, right?
Whether the restriction is an actual rule of that language or whether the restriction is a warning, which others tell me I must eliminate, makes little difference to me. Try another language like Ada or one of those many safer languages whose "safe" rules make it much more
tedious to program than C++.
I've been using C++ for decades quite happily; I don't need another language to satisfy such a trumped-up need. While the language allows many (potentially) unsafe things, I choose to use a safe subset of the language, like so many others.
Another error that warning helps with is when a name is reused and a maintainer sees an earlier, but not a later, declaration of the reused identifier. The code may compile, yet not work as the maintainer intended.
That is a red herring. Any code may compile and not do what was intended.
Inaccurate. When one can simply take steps to prevent potential errors, it's unwise not to do so.
You may argue that testing should reveal the resulting problems, but test coverage is rarely so complete as to reliably account for all cases. Renaming the variables will prevent those problems, though you do have to invent unique names.
Sorry, programming in a language where every variable in a TU, even the most trivial, has to have some utterly unique name in that TU, all of whose names I am expected to know and keep track of, is not for me.
That's a straw man argument. The issue in question only arises when names in an inner scope hide those in an outer scope.
Torture yourself with such absurdities, in the name of safe programming no doubt, but I want none of it.
I don't care for hyperbolic discussions. I've made my point. You're welcome to learn from it or ignore it. I hope others will benefit.
Preventing possible errors is a worthwhile goal. Such warnings, at least a subset of them, can be helpful. There are, of course, warnings that border on the ridiculous, but don't ignore the value of the majority.
I agree with you that preventing possible errors is a worthwhile goal.
Well, at least we agree on that.
Where we disagree sharply is the tedious elimination of warnings on perfectly correct code as a path to that prevention.
The point you've missed is that you generally cannot be certain that a warning is benign without scrutinizing the code in question. I'd prefer to do that just once, so after ensuring the code is correct, I also take steps to eliminate the warning. That allows me to focus on new warnings rather than try to remember which to ignore or to miss new warnings buried amidst so many others.
It is a huge waste of time, and totally unnecessary to boot, to change code to prevent warnings when the code is correct in doing what it aims to accomplish.
I disagree, obviously. -- Rob (Sent from my portable computation device.)
On Wed, Nov 21, 2018 at 4:10 PM Rob Stewart via Boost wrote:
On November 19, 2018 4:01:45 PM EST, Edward Diener via Boost < boost@lists.boost.org> wrote:
Warning free is a nice ideal, but the truth is more complicated. Many warnings are not really about possible coding mishaps but about a sort of lint-like standard which compilers now want to enforce, and have nothing to do with correct C++ code. Other warnings are simply erroneous, as is prevalent almost everywhere in the current default VC++ preprocessor. Eliminating even the first class of warnings is often a fool's errand AFAIAC. All one ends up doing is adding completely unnecessary code restrictions to what is perfectly acceptable C++ code.
That's a broad claim not generally borne out in my experience.
This is a broad claim borne out by my experience. [ Note - I'm a fan of most warnings. Some of them, though, should DIAF ]

For example, should the following code cause a warning to be emitted?

auto p = std::make_unique<unsigned char>(0);

At least one popular compiler warns on this code, complaining about a truncation from int --> unsigned char, and a possible loss of data.

We spent a lot of time and effort in Boost.DateTime with this. See https://github.com/boostorg/date_time/pull/50 and others.

-- Marshall
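For the curious, the usual shape of the appeasement (a sketch; the actual Boost.DateTime change is in the PR above):

#include <memory>

int main()
{
    // At least one compiler warns here: the int literal 0 is converted to
    // unsigned char inside make_unique, and it flags possible loss of data.
    auto p = std::make_unique<unsigned char>(0);

    // The usual appeasement: supply the exact type so no conversion occurs.
    auto q = std::make_unique<unsigned char>(static_cast<unsigned char>(0));

    (void)p; (void)q; // and, fittingly, silence the unused-variable warnings
}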
On 26 November 2018 14:44, Marshall Clow via Boost wrote:
On Wed, Nov 21, 2018 at 4:10 PM Rob Stewart via Boost
wrote: On November 19, 2018 4:01:45 PM EST, Edward Diener via Boost < boost@lists.boost.org> wrote:
Warning free is a nice ideal, but the truth is more complicated. Many warnings are not really about possible coding mishaps but about a sort of lint-like standard which compilers now want to enforce, and have nothing to do with correct C++ code. Other warnings are simply erroneous, as is prevalent almost everywhere in the current default VC++ preprocessor. Eliminating even the first class of warnings is often a fool's errand AFAIAC. All one ends up doing is adding completely unnecessary code restrictions to what is perfectly acceptable C++ code.
That's a broad claim not generally borne out in my experience.
This is a broad claim borne out by my experience. [ Note - I'm a fan of most warnings. Some of them, though, should DIAF ]
For example, should the following code cause a warning to be emitted?
auto p = std::make_unique<unsigned char>(0);
At least one popular compiler warns on this code, complaining about a truncation from int --> unsigned char, and a possible loss of data.
We spent a lot of time and effort in Boost.DateTime with this. See https://github.com/boostorg/date_time/pull/50 and others.
Indeed - I think I first moaned about it so long, long ago that I had forgotten about it :-(

But all this language lawyering is missing what I think is the most important point: *documenting* that this issue has been considered. If the programmer knows that he (or either sex, if any ***) knows something that the compiler doesn't, may not, or cannot know, so that bad things cannot happen, he needs to write a comment to record this. *How* the warning is quieted is then less important, but I favour a very local disable~~~.

Paul

*** - it might even be written by a computer, and I suspect that they are gender-fluid ;-)

~~~ A reason to prefer static_cast might be that some compilers do not permit fine-grained enough disabling of warnings?

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
On Mon, Nov 26, 2018 at 6:44 AM Marshall Clow via Boost < boost@lists.boost.org> wrote:
For example, should the following code cause a warning to be emitted?
auto p = std::make_unique<unsigned char>(0);
At least one popular compiler warns on this code, complaining about a truncation from int --> unsigned char, and a possible loss of data.
We spent a lot of time and effort in Boost.DateTime with this. See https://github.com/boostorg/date_time/pull/50 and others.
Exactly. I submit that it is neither possible nor desirable to write code that will not result in warnings on some compiler. This is trivially true, because there is no standard for warnings. #pragma system_header is your friend.
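For completeness, the narrowly scoped spellings of that hammer (the pragmas are real; which warning to name - -Wconversion and C4244 below - depends on the case):

// Scoped suppression around code that has been examined and judged correct.
#if defined(__GNUC__) || defined(__clang__)
#  pragma GCC diagnostic push
#  pragma GCC diagnostic ignored "-Wconversion"
#elif defined(_MSC_VER)
#  pragma warning(push)
#  pragma warning(disable : 4244) // conversion: possible loss of data
#endif

unsigned char truncate_to_byte(int x) { return x; } // the audited conversion

#if defined(__GNUC__) || defined(__clang__)
#  pragma GCC diagnostic pop
#elif defined(_MSC_VER)
#  pragma warning(pop)
#endif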
On Mon, Nov 19, 2018 at 1:20 PM Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
As someone who works with VxWorks at a company that does one of the above (Planes), I'm happy to see Wind River finally getting to this point!

One thing that I think the community would find useful would be if Wind River would stand up some VxWorks-based test environments that can run our suite of regression tests. This would allow developers to get rapid feedback on the effect of their changes to Boost on VxWorks, including whether they introduce warnings. It would also be nice/useful to get the Dinkum libraries tested separately from the VxWorks platform.

There are instructions for getting it set up here: https://www.boost.org/development/running_regression_tests.html

You can see the results for the master/develop branches here: https://www.boost.org/development/tests/master/developer/summary.html https://www.boost.org/development/tests/develop/developer/summary.html

I'm assuming that you would be cross-compiling the tests; this might be a bit more involved than what is talked about in the setup guide... having to copy tests to a different platform to run. This has been done in the past by some android developers, although their results aren't in the test matrix anymore. I think they were using this script to run: https://github.com/crystax/android-platform-ndk/blob/master/tests/run-boost-...

Good luck! Tom
On 19.11.2018 at 20:20, Brian Kuhl via Boost wrote:
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
I run our in-house Boost distribution with the same policy (geared towards Visual Studio). It took me tons of work over several years to get there, but during the process of auditing every warning (and dealing with it one way or the other) I actually found errors.

Beyond that, the tests have to pass not only in debug build like the test matrix does, but also in release build, plus both 32 bit and 64 bit. Each one of the four modes exhibits different sets of warnings and errors in some libraries. Some libraries are poster children of careful programming, others - even new ones - less so.

That may sound egregious but this is how we develop our software for industrial QA machines.

Ciao Dani
Hi Daniela,
On 20. Nov 2018, at 20:14, Daniela Engert via Boost
wrote: On 19.11.2018 at 20:20, Brian Kuhl via Boost wrote:
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
I run our in-house Boost distribution with the same policy (geared towards Visual Studio). It took me tons of work over several years to get there, but during the process of auditing every warning (and dealing with it one way or the other) I actually found errors. Beyond that, the tests have to pass not only in debug build like the test matrix does, but also in release build, plus both 32 bit and 64 bit. Each one of the four modes exhibits different sets of warnings and errors in some libraries. Some libraries are poster children of careful programming, others - even new ones - less so.
That may sound egregious but this is how we develop our software for industrial QA machines.
just for my curiosity, are you regularly merging these changes back upstream into Boost or do you maintain a list of patches to apply to each new Boost release? Best regards, Hans
On 21.11.2018 at 09:48, Hans Dembinski via Boost wrote: Hi Hans!
just for my curiosity, are you regularly merging these changes back upstream into Boost or do you maintain a list of patches to apply to each new Boost release?
Both. Pushing upstream depends on my impression about how welcome warning-related changes are. Error-related changes are pushed immediately, of course. Ciao Dani
The ideal is always to upstream; I've done a bit of that in the past. Other patches carry forward, but we certainly don't "try" to do that.

When you bundle open source you become focused on the release you've bundled while the rest of the world moves on. We are not like a big Linux distro that grabs every new release. Upstreaming then requires moving your "fix" to the head of the tree and working with the maintainer to make sure it's portable and generic and follows the project's conventions. That takes time and effort, and sometimes gets stalled in the "business" priority queue.

I'm speaking here primarily of the proprietary VxWorks and Boost effort; Wind River also has other products where the focus is different and virtually the whole product is "open", e.g. https://www.starlingx.io/ , and the "value proposition" for customers is different.

On Wed, 21 Nov 2018 at 03:47, Hans Dembinski via Boost < boost@lists.boost.org> wrote:
Hi Daniela,
On 20. Nov 2018, at 20:14, Daniela Engert via Boost < boost@lists.boost.org> wrote:
On 19.11.2018 at 20:20, Brian Kuhl via Boost wrote:
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
I run our in-house Boost distribution with the same policy (geared towards Visual Studio). It took me tons of work over several years to get there, but during the process of auditing every warning (and dealing with it one way or the other) I actually found errors. Beyond that, the tests have to pass not only in debug build like the test matrix does, but also in release build, plus both 32 bit and 64 bit. Each one of the four modes exhibits different sets of warnings and errors in some libraries. Some libraries are poster children of careful programming, others - even new ones - less so.
That may sound egregious but this is how we develop our software for industrial QA machines.
just for my curiosity, are you regularly merging these changes back upstream into Boost or do you maintain a list of patches to apply to each new Boost release?
Best regards, Hans
On 11/19/18 11:20 AM, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost in regards to compiler warnings. While it is simple to say, everything displayed by -Wall, in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance.
From my perspective, this would be OK. OK - one changes some perfectly fine code to address a warning so we have easily verifiable "clean" code. It's just a little BS to keep everyone happy, and it does help find some errors.

When one makes code for multiple compilers, it's a whole 'nuther ball game. Making one's code pass warning-free in such an environment ends up cluttering the code with lots of ... clutter. Remember we've got libraries supporting dozens of compilers - counting previous versions. In this environment, it's a fool's errand.

Robert Ramey
On 22 November 2018 01:11, Robert Ramey via Boost wrote:
On 11/19/18 11:20 AM, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost in regards to compiler warnings. While it is simple to say, everything displayed by -Wall, in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance.
From my perspective, this would be OK. OK - one changes some perfectly fine code to address a warning so we have easily verifiable "clean" code. It's just a little BS to keep everyone happy, and it does help find some errors. When one makes code for multiple compilers, it's a whole 'nuther ball game. Making one's code pass warning-free in such an environment ends up cluttering the code with lots of ... clutter. Remember we've got libraries supporting dozens of compilers - counting previous versions. In this environment, it's a fool's errand.
+1

We have multiple compilers, none of them really, really compliant with an imperfectly-specified language, producing different warnings. Trying to please every compiler and configuration all the time proves to be a game of Whack-a-mole! But at Boost, we try hard.

One of the objectives of gathering the WarningsGuidelines (https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines - update suggestions welcome, or you can edit it yourself) was to emphasize how suppressing a warning *documents* that the code has been examined and that the programmer has concluded that the warning is not applicable here. We can and should do better about documenting this as comments in the code, along the lines of the sketch below.

Paul

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
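Something like the following is the kind of in-code documentation meant here (the wording, bounds, and warning number are invented for illustration):

// WARNING AUDIT (2018-11): VC++ C4244 fires on the conversion below.
// Examined: callers guarantee 0 <= n <= 255, so no data can be lost.
// Suppressed locally rather than restructuring correct code.
unsigned char to_byte(int n)
{
#if defined(_MSC_VER)
#  pragma warning(push)
#  pragma warning(disable : 4244)
#endif
    return n;
#if defined(_MSC_VER)
#  pragma warning(pop)
#endif
}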
Apologies for the top post as I am on a tablet and can't be arsed with wrestling the iOS text selection interface.

As the author of Beast I strive to eliminate all warnings no matter how small; otherwise users will complain and I will still end up having to fix them anyway.

That said, there were a number of odd compiler version-specific warnings which have plagued me for over a year. I am pleased to credit Damian Jarek for getting to the bottom of the worst one, which manifests in gcc 8 and later as a "possibly uninitialized" variable warning. For months we thought it was spurious, but he tracked it down and figured it out.

In terms of user satisfaction, it enhances the reputation of a library when it can be consistently used without producing warnings. Or rather, it tarnishes the reputation of the library when warnings appear. "Fixing all the warnings" is usually a cheap but effective way to raise the prestige of a library.
Thanks
On Mon, Nov 19, 2018 at 11:21 AM Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost in regards to compiler warnings. While it is simple to say, everything displayed by -Wall, in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance.
Many thanks
Brian Kuhl
-- Follow me on GitHub: https://github.com/vinniefalco
On 19 November 2018 19:21, Brian Kuhl via Boost wrote:
I'd like to confirm the guidance on Warnings I find here https://svn.boost.org/trac10/wiki/Guidelines/WarningsGuidelines is still considered current?
context ...
At Wind River we are in the process of working with Boost 1.68 and VxWorks 7 (with Dinkum 7.00 and Clang 6.0 for ARM and IA boards, and GCC 8.1 for PowerPC) with the hope of bundling Boost with our product.
Many of our customers make certified systems ( Planes, Trains, Medical Equipment, Factory Automation, etc. ) and the trend in these industries is to be pedantic about eliminating all compiler warnings.
While we have not traditionally required zero warnings in open source code shipped with our product, there is pressure on us to move in that direction, and as a result we will probably be contributing pull requests specifically to fix or suppress compiler warnings over the coming years.
I'd like to establish clear guidelines for my team as to what is an appropriate "coding standard" for Boost in regards to compiler warnings. While it is simple to say, everything displayed by -Wall, in practice there are many edge cases, e.g. is an unused parameter acceptable in test code? So I'd like to get the maintainers' guidance.
This thread has been hijacked by some interesting (if inconclusive) discussions, mainly bemoaning some deficiencies of C++. But have we answered your questions, Brian? I am sure that we all would like to see Boost bundled with important packagers like Wind River, to boost Boost's usage. Is there anything else we should be doing? (Even if we feel that some hoops may not be that useful, it may still be worth jumping through them?)

Paul

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
participants (23)
- Alexander Grund
- Andrey Semashev
- Brian Kuhl
- Daniela Engert
- degski
- Edward Diener
- Emil Dotchevski
- Gavin Lambert
- Hans Dembinski
- James E. King III
- Jayesh Badwaik
- John Maddock
- Marc Glisse
- Marshall Clow
- Paul A. Bristow
- Peter Dimov
- Richard Hodges
- Rob Stewart
- Robert Ramey
- Stephan T. Lavavej
- Steven Watanabe
- Tom Kent
- Vinnie Falco