On 2015-11-24 19:29, Domagoj Šarić wrote:
On Tue, 17 Nov 2015 06:24:37 +0530, Andrey Semashev wrote:
Personally, I'm in favor of adding these: BOOST_OVERRIDE, BOOST_FINAL. Although their implementation should be similar to other C++11 macros - they should be based on BOOST_NO_CXX11_FINAL and BOOST_NO_CXX11_OVERRIDE.
I agree, but what if you don't have final but do have sealed (with a less recent MSVC)?
As far as I understand, sealed can be used only with C++/CLI, is that right? If so, then I'd rather not add a macro for it. If, on the other hand, sealed can be used equivalently to final in all contexts, then you could use it to implement BOOST_FINAL.
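For example, roughly like this (a sketch only; whether sealed actually works for native classes is the part that needs checking):

    // Sketch - the fallback to MSVC's non-standard 'sealed' is an
    // assumption that would need verifying outside of C++/CLI.
    #if !defined(BOOST_NO_CXX11_FINAL)
    #   define BOOST_FINAL final
    #elif defined(BOOST_MSVC)
    #   define BOOST_FINAL sealed
    #else
    #   define BOOST_FINAL
    #endif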
I would like to have BOOST_ASSUME (implemented without an assert, i.e. equivalent to your BOOST_ASSUME_UNCHECKED), BOOST_UNREACHABLE (again, without an assert, i.e. equivalent to your BOOST_UNREACHABLE_UNCHECKED). The reason for no asserts is that (a) Boost.Config should not depend on Boost.Assert and (b) I fear that having additional expressions before the intrinsic could inhibit the optimization. You can always add *_CHECKED versions of the macros locally, or just use asserts beside the macros.
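Something along these lines is what I have in mind (a rough sketch, assuming the usual intrinsics; the actual detection would live in Boost.Config):

    // Rough sketch - no asserts involved, only the compiler hint itself.
    #if defined(_MSC_VER)
    #   define BOOST_ASSUME(expr)  __assume(expr)
    #   define BOOST_UNREACHABLE() __assume(false)
    #elif defined(__GNUC__)
    #   define BOOST_ASSUME(expr)  do { if (!(expr)) __builtin_unreachable(); } while (0)
    #   define BOOST_UNREACHABLE() __builtin_unreachable()
    #else
    #   define BOOST_ASSUME(expr)  ((void)0)
    #   define BOOST_UNREACHABLE() ((void)0)
    #endif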
The additional expressions are assert macros which resolve to nothing in release builds (and thus have no effect on optimisations... checked ;)
In release builds asserts are expanded to something like (void)0. Technically, that's nothing, but who knows if it affects optimization.
Dependency on Boost.Assert is technically only there if you use the 'checked' macros... I agree that it is still 'ugly' (and the user would have to separately/explicitly include boost/assert.hpp to avoid a circular dependency), but so is, to me, the idea of having to manually duplicate/prefix every assume with an assert (I like all my assumes verified, and this would add a lot of extra verbosity)...
You can add the checked versions to Boost.Assert with a separate PR.
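There it could look roughly like this (the name is illustrative only, not an existing Boost.Assert macro):

    // Hypothetical 'checked' variant layered on top of the Boost.Config
    // macro proposed above; the spelling is illustrative only.
    #include <boost/assert.hpp>
    #include <boost/config.hpp>  // assuming BOOST_ASSUME ends up there

    #define BOOST_ASSUME_CHECKED(expr) \
        do { BOOST_ASSERT(expr); BOOST_ASSUME(expr); } while (0)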
I would have liked BOOST_HAS_CXX_RESTRICT to indicate that the compiler has support for the C99 keyword 'restrict' (or an equivalent) in C++ (the CXX in the macro name emphasizes that the feature is available in C++, not C). The BOOST_RESTRICT macro would be defined to that keyword or empty if there is no support.
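Roughly like this (the detection here is an assumption; Boost.Config would have its own checks):

    // Map BOOST_RESTRICT to the compiler's non-standard spelling,
    // or to nothing if there is no support.
    #if defined(_MSC_VER) || defined(__GNUC__) || defined(__clang__)
    #   define BOOST_RESTRICT __restrict
    #else
    #   define BOOST_RESTRICT
    #endif

    // usage:
    void copy(float * BOOST_RESTRICT dst, float const * BOOST_RESTRICT src, int n);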
Sure, I can add the detection macro, but for which 'feature set' (already for the minimal one - pointers only - or only for full support - pointers, references and this)?
That's a good question. I'm leaning towards full support, although that will probably not make MSVC users happy. There is a precedent in BOOST_DEFAULTED_FUNCTION - it expands to C++03 code on gcc 4.5, even though that compiler supports defaulted functions in C++11 mode (but only in public sections).
I don't see much use in BOOST_ATTRIBUTES and related macros - you can achieve the same results with feature specific-macros (e.g. by using BOOST_NORETURN instead of BOOST_ATTRIBUTES(BOOST_DOES_NOT_RETURN)).
Yes, I may change those... I was, however, 'forward thinking' w.r.t. attribute standardisation (so that BOOST_ATTRIBUTES(BOOST_DOES_NOT_RETURN) usage would one day look like [[noreturn]]).
That still doesn't improve over BOOST_NORETURN. If there's a reason to, we could even define BOOST_NORETURN to [[noreturn]].
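I.e. something along these lines (a sketch; the __has_cpp_attribute check is just one possible way to detect it):

    // Prefer the standard attribute when the compiler reports it,
    // otherwise fall back to the vendor spellings.
    #if defined(__has_cpp_attribute)
    #   if __has_cpp_attribute(noreturn)
    #       define BOOST_NORETURN [[noreturn]]
    #   endif
    #endif
    #if !defined(BOOST_NORETURN)
    #   if defined(_MSC_VER)
    #       define BOOST_NORETURN __declspec(noreturn)
    #   elif defined(__GNUC__)
    #       define BOOST_NORETURN __attribute__((__noreturn__))
    #   else
    #       define BOOST_NORETURN
    #   endif
    #endif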
I don't see the benefit of BOOST_NOTHROW_LITE.
It's a nothrow attribute that does not insert runtime checks to call std::terminate...and it is unfortunately not offered by Boost.Config...
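i.e. something along the lines of (a sketch; the exact spelling per compiler would need checking):

    // 'Lite' nothrow: an optimisation hint only, without the runtime
    // std::terminate machinery that noexcept implies.
    #if defined(__GNUC__)
    #   define BOOST_NOTHROW_LITE __attribute__((__nothrow__))
    #elif defined(_MSC_VER)
    #   define BOOST_NOTHROW_LITE __declspec(nothrow)
    #else
    #   define BOOST_NOTHROW_LITE noexcept  // fallback, loses the 'lite' part
    #endif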
Do you have measurements of the possible benefits compared to noexcept? I mean, noexcept was advertised as the more efficient version of throw() already.
Ditto BOOST_HAS_UNION_TYPE_PUNNING_TRICK (doesn't every compiler support this?).
'I'm all with you on this one', but since 'it is not in the standard', language purists will probably complain if it is used unconditionally...
To some extent this is guaranteed by [class.union]/1 in C++11.
(I need this and the *ALIAS* macros for a rewrite/expansion of Boost.Cast, which includes 'bitwise_cast', a sort of generic, safe & optimal reinterpret_cast)...
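e.g. roughly (names are mine, not the final Boost.Cast interface):

    // The union 'punning trick' behind a possible bitwise_cast - relies on
    // the compiler-supported punning discussed above, not on the standard.
    template <typename To, typename From>
    To bitwise_cast(From const & from)
    {
        static_assert(sizeof(To) == sizeof(From), "sizes must match");
        union { From from_; To to_; } caster = { from };
        return caster.to_;  // read through the inactive member
    }

    // usage: inspect the bit pattern of a float
    // unsigned int const bits = bitwise_cast<unsigned int>(1.0f);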
Again, it looks like this macro would have a rather specialized use.
I don't think BOOST_OVERRIDABLE_SYMBOL is a good idea, given that the same effect can be achieved in pure C++.
You mean creating a class template with a single dummy template argument and a static data member just so that you can define a global variable in a header w/o linker errors?
Slightly better:

    template< typename T, typename Tag = void >
    struct singleton { static T instance; };

    template< typename T, typename Tag >
    T singleton< T, Tag >::instance;
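Used from a header, that would look something like this (the tag name is just for illustration):

    // usage sketch: a 'global' defined entirely in a header, no .cpp needed
    struct my_lib_tag;  // illustrative tag type
    typedef singleton< int, my_lib_tag > my_counter;

    inline void bump() { ++my_counter::instance; }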
Also, some compilers offer this functionality only as a pragma.
You mean in a way that would require a _BEGIN and _END macro pair?
Maybe for some compilers. I meant this:
https://docs.oracle.com/cd/E19205-01/819-5267/bkbkr/index.html
There's just no point in these compiler-specific workarounds when there's a portable solution.
Calling convention macros are probably too specialized to functional libraries; I don't think there's much use for them elsewhere. I would rather not have them in Boost.Config, to avoid spreading their use to other Boost libraries.
That's kind of self-contradictory: if there is a 'danger' of them being used in other libraries, that would imply there is a 'danger' of them being useful...
What I mean is that having these macros in Boost.Config might encourage people to use them where they would normally not.
In any case, I agree that most of those would be used mainly in functional libraries, but for HPC and math libraries especially the *REG*/'fastcall' conventions are useful when one cannot (e.g. on ABI boundaries) or does not want to rely on the compiler (IPO, LTCG, etc.) to automatically choose the optimal/custom calling convention... Admittedly this is mostly useful on targets with 'bad' default conventions, like 32-bit x86 and MS x64, but these are still widely used ABIs :/
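(For illustration, something like this is what I mean - the macro name and the detection here are mine, purely illustrative:

    // Pass-arguments-in-registers convention where the default ABI
    // does not already do so (illustrative sketch only).
    #if defined(_MSC_VER) && defined(_M_IX86)
    #   define BOOST_LIB_FASTCALL __fastcall
    #elif defined(__GNUC__) && defined(__i386__)
    #   define BOOST_LIB_FASTCALL __attribute__((fastcall))
    #else
    #   define BOOST_LIB_FASTCALL
    #endif

    float BOOST_LIB_FASTCALL dot(float const * a, float const * b, int n);

)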
Non-standard calling conventions give users enough of a headache that they should be avoided as much as possible. You might use them in library internals, but there I think it's better to avoid the call altogether - by forcing the hot code inline.
Function optimization macros are probably too compiler- and case-specific. Your choice of what counts as fast code, small code, or acceptable math optimizations may not fit others.
If the indisputable goal (the definition of 'good codegen') is fast and small code/binaries, then 'I have to disagree'. A library dev can certainly know that certain code will never be part of a hot block (assuming correct usage of the library) - for example initialisation, cleanup or error/failure handling code - and that it should thus be optimised for size (which is actually optimising for real-world speed, by reducing unnecessary bloat and I/O and CPU cache thrashing).
If that code is unimportant then why do you care? Simply organizing code into functions properly and using BOOST_LIKELY/UNLIKELY where needed will do the trick.
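E.g. (the helper names are purely illustrative):

    #include <boost/config.hpp>

    void report_error();               // hypothetical cold-path helper
    void do_the_work(char const * p);  // hypothetical hot-path helper

    void process(char const * p)
    {
        if (BOOST_UNLIKELY(p == nullptr))
            report_error();   // the compiler keeps this out of the hot path
        else
            do_the_work(p);
    }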
Also, things like these should have a very limited use, as the user has to have the ultimate control over the build options.
I'm 'more than all for' ultimate control - as explained above, this can actually give more control to the user (+ Boost.Build was already a major pain when it came to user control over changing compiler optimisation options in pre-existing variants)...
What I was saying is that it's the user who has to decide whether to build your code for size, for speed or for debug. That includes the parts of the code that you, the library author, consider performance critical or otherwise. You may want to restrict his range of choices, e.g. when a certain optimization breaks your code. I guess you could try to spell these restrictions with these macros, but frankly I doubt it's worth the effort. I mean, there are so many possibilities on different compilers.

One legitimate reason to use these macros that comes to mind is changing the target instruction set for a set of functions that require it (e.g. when a function is optimized for AVX in an application that is supposed to run in the absence of this extension). But then this only seems necessary with gcc, which again makes it a rather specific workaround.
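Something like gcc's per-function target attribute is what I had in mind (function names here are illustrative):

    #include <cstddef>

    // Compiled with AVX2 enabled just for this function; the rest of the
    // translation unit keeps the baseline instruction set (gcc/clang).
    __attribute__((target("avx2")))
    void transform_avx2(float * data, std::size_t n);

    // Baseline fallback, selected at runtime via CPU detection (not shown).
    void transform_generic(float * data, std::size_t n);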