BOOST_PP array extension proposal
Hello to all of you!

A few years ago, I developed a small extension to BOOST_PP that adds a few functions to it. Namely:

* ARRAY_CAT, which concatenates two BOOST_PP arrays;
* ARRAY_LOWER_BOUND, which finds the index of the lower bound of an item in a given array of numerical values;
* ARRAY_SORT, which sorts the given array of numerical values;
* ARRAY_SORT_U, which sorts the given array and removes duplicates.

I'm not sure it'd be worth adding to BOOST_PP; but if you think it might be, I'd be happy to submit a merge request. The code is rather small and can be browsed at https://github.com/nicuveo/TOOLS_PP

Any suggestion or input is welcome. Thanks for your time!

-- Antoine Leblanc
On 9/10/2015 1:35 PM, Antoine Leblanc wrote:
Hello to all of you!
A few years ago, I developed a small extension to BOOST_PP that adds a few functions to it. Namely:

* ARRAY_CAT, which concatenates two BOOST_PP arrays;
* ARRAY_LOWER_BOUND, which finds the index of the lower bound of an item in a given array of numerical values;
* ARRAY_SORT, which sorts the given array of numerical values;
* ARRAY_SORT_U, which sorts the given array and removes duplicates.
I'm not sure it'd be worth adding to BOOST_PP; but if you think it might be, I'd be happy to submit a merge request. The code is rather small and can be browsed at https://github.com/nicuveo/TOOLS_PP
Any suggestion or input is welcome. Thanks for your time!
Given that variadic macros are supported for just about all compilers now, the Boost PP array is largely obsolete as the Boost PP tuple has all the functionality which an array has except for the fact that a tuple can never hold 0 elements while an array can. Therefore I am trying to phase out the use of Boost PP arrays in favor of Boost PP tuples. If you change your functionality to work with tuples instead when variadic macros are being used, and create a PR for it against the Boost PP 'develop' branch, I would be happy to look at it and merge it into Boost PP.
On 10 September 2015 at 22:11, Edward Diener
If you change your functionality to work with tuples instead when variadic macros are being used, and create a PR for it against the Boost PP 'develop' branch, I would be happy to look at it and merge it into Boost PP.
I could maybe include both, and keep the tuple version behind a #ifdef BOOST_PP_VARIADIC? -- Antoine
On 9/10/2015 5:38 PM, Antoine Leblanc wrote:
On 10 September 2015 at 22:11, Edward Diener
wrote: If you change your functionality to work with tuples instead when variadic macros are being used, and create a PR for it against the Boost PP 'develop' branch, I would be happy to look at it and merge it into Boost PP.
I could maybe include both, and keep the tuple version behind a #ifdef BOOST_PP_VARIADIC?
That's fine. It's BOOST_PP_VARIADICS BTW.
On 11 September 2015 at 09:46, Edward Diener
On 9/10/2015 5:38 PM, Antoine Leblanc wrote:
I could maybe include both, and keep the tuple version behind a #ifdef BOOST_PP_VARIADIC?
That's fine. It's BOOST_PP_VARIADICS BTW.
Okay, it is done, I ported the array code to tuples.

(I had to keep arrays at some point, though: my SORT works by inserting elements one by one into a buffer, which is therefore an empty container at the start of the loop. Since I can't start the loop with an empty tuple, BOOST_PP_TUPLE_SORT has to use an array buffer and convert it back to a tuple afterwards. I thought about using a non-empty tuple containing 0 as the buffer and dropping the first element afterwards, but this wouldn't work with SORT_U...)

Newbie questions time, sorry: I can't seem to find information on how to submit my patch? Also, I'm not sure I understand how BOOST_PP is tested: I'm currently using a small script that tries the macros with both g++ and clang++, but I'm not sure how to integrate it into the existing test suites.

Thanks again for your time!

-- Antoine
On 11 September 2015 at 11:24, Antoine Leblanc
Newbie questions time, sorry: I can't seem to find information on how to submit my patch? Also, I'm not sure I understand how BOOST_PP is tested: I'm currently using a small script that tries the macros with both g++ and clang++, but I'm not sure how to integrate it into the existing test suites.
I tried creating a ticket (https://svn.boost.org/trac/boost/ticket/11644) and attaching the patch file to it, but my patch file gets flagged as spam and rejected due to URLs in file headers. Sorry for bothering you with trivial questions. -- Antoine
On 9/11/2015 8:28 AM, Antoine Leblanc wrote:
On 11 September 2015 at 11:24, Antoine Leblanc
wrote: Newbie questions time, sorry: I can't seem to find information on how to submit my patch? Also, I'm not sure I understand how BOOST_PP is tested: I'm currently using a small script that tries the macros with both g++ and clang++, but I'm not sure how to integrate it into the existing test suites.
I tried creating a ticket (https://svn.boost.org/trac/boost/ticket/11644) and attaching the patch file to it, but my patch file gets flagged as spam and rejected due to URLs in file headers.
URLs in the files of the patch? You don't need those. If you need to mention URLs, add them in a comment on the Boost ticket.
Sorry for bothering you with trivial questions.
On 11 September 2015 at 14:00, Edward Diener
URLs in the files of the patch? You don't need those. If you need to mention URLs, add them in a comment on the Boost ticket.
The header in the files I added contains a link to the license; that might be what triggered the spam checker. Anyway, I'll do a GitHub pull request instead. Thanks for your answers! -- Antoine
On 9/11/2015 6:24 AM, Antoine Leblanc wrote:
On 11 September 2015 at 09:46, Edward Diener
wrote: On 9/10/2015 5:38 PM, Antoine Leblanc wrote:
I could maybe include both, and keep the tuple version behind a #ifdef BOOST_PP_VARIADIC?
That's fine. It's BOOST_PP_VARIADICS BTW.
Okay, it is done, I ported the array code to tuples.
(I had to keep arrays at some point, though: my SORT works by inserting elements one by one into a buffer, which is therefore an empty container at the start of the loop. Since I can't start the loop with an empty tuple, BOOST_PP_TUPLE_SORT has to use an array buffer and convert it back to a tuple afterwards. I thought about using a non-empty tuple containing 0 as the buffer and dropping the first element afterwards, but this wouldn't work with SORT_U...)
Newbie questions time, sorry: I can't seem to find information on how to submit my patch?
See https://svn.boost.org/trac/boost/wiki/StartModPatchAndPullReq.
Also, I'm not sure I understand how BOOST_PP is tested: I'm currently using a small script that tries the macros with both g++ and clang++, but I'm not sure how to integrate it into the existing test suites.
It uses a Jamfile.v2 in the test sub-directory and tests in *.cxx files in the test sub-directory. For arrays look at array.cxx and for tuples look at tuple.cxx. Ideally every Boost PP macro should have its own .cxx file, but I never got around to implementing that.
Thanks again for your time!
On 11 September 2015 at 13:57, Edward Diener
See https://svn.boost.org/trac/boost/wiki/StartModPatchAndPullReq.
There it is: https://github.com/boostorg/preprocessor/pull/7 Thanks again for your help! -- Antoine
On 9/11/2015 10:07 AM, Antoine Leblanc wrote:
On 11 September 2015 at 13:57, Edward Diener
wrote: See https://svn.boost.org/trac/boost/wiki/StartModPatchAndPullReq.
There it is: https://github.com/boostorg/preprocessor/pull/7
Thanks again for your help!
I will look at it. Thanks!
On 11 September 2015 at 13:57, Edward Diener
It uses a Jamfile.v2 in the test sub-directory and tests in *.cxx files in the test sub-directory. For arrays look at array.cxx and for tuples look at tuple.cxx. Ideally every Boost PP macro should have its own .cxx file, but I never got around to implementing that.
A question about that. With the way those tests are made, one can only compare numerical values at compile time, which means all array tests are either a call to ARRAY_ELEM, ARRAY_SIZE or ARRAY_IS_EMPTY. I'd like to find a way to write tests that compare entire arrays / tuples. A naive way to do that would be to STRINGIZE the arrays and compare the resulting strings at runtime.

However, it seems to me that doing runtime checks would be a big change to the way BOOST_PP is currently tested, and I'd like your opinion on that subject before doing anything.

-- Antoine
On 9/11/2015 4:21 PM, Antoine Leblanc wrote:
On 11 September 2015 at 13:57, Edward Diener
wrote: It uses a Jamfile.v2 in the test sub-directory and tests in *.cxx files in the test sub-directory. For arrays look at array.cxx and for tuples look at tuple.cxx. Ideally every Boost PP macro should have its own .cxx file, but I never got around to implementing that.
A question about that. With the way those tests are made, one can only compare numerical values at compile time, which means all array tests are either a call to ARRAY_ELEM, ARRAY_SIZE or ARRAY_IS_EMPTY. I'd like to find a way to write tests that compare entire arrays / tuples. A naive way to do that would be to STRINGIZE the arrays and compare the resulting strings at runtime.
However, it seems to me that doing runtime checks would be a big change to the way BOOST_PP is currently tested, and I'd like your opinion on that subject before doing anything.
If you were using VMD you could use BOOST_VMD_EQUAL ( that's a shameless plug for my library ) but since we are in Boost PP land you could create your own macro for testing purposes which compares each array element's numeric value to see if the arrays are equal or not, returning 1 if they are not and 0 if they are. Then we are back to the way the tests are done, comparing numeric values at compile time.
On 11 September 2015 at 23:49, Edward Diener
If you were using VMD you could use BOOST_VMD_EQUAL ( that's a shameless plug for my library )
I'll have a look at it!
but since we are in Boost PP land you could create your own macro for testing purposes which compares each array element's numeric value to see if the arrays are equal or not, returning 1 if they are not and 0 if they are. Then we are back to the way the tests are done, comparing numeric values at compile time.
That'd work indeed. Meanwhile I've tried the compile-time string comparison approach, inspired by Scott Schurr's str_const (http://stackoverflow.com/a/15863826). It already works, but it's not easily readable... I'll add it to my pull request, and you can tell me whether you deem it acceptable or not. -- Antoine
On 11 September 2015 at 23:49, Edward Diener
but since we are in Boost PP land you could create your own macro for testing purposes which compares each array element's numeric value to see if the arrays are equal or not, returning 1 if they are not and 0 if they are.
In the end, that's what I've done. Except that instead of restricting those to test purposes, I added them to the library, which now also features BOOST_PP_ARRAY_EQUAL and BOOST_PP_TUPLE_EQUAL. -- Antoine
On 9/14/2015 10:02 AM, Antoine Leblanc wrote:
On 11 September 2015 at 23:49, Edward Diener
wrote: but since we are in Boost PP land you could create your own macro for testing purposes which compares each array element's numeric value to see if the arrays are equal or not, returning 1 if they are not and 0 if they are.
In the end, that's what I've done. Except that instead of restricting those to test purposes, I added them to the library, which now also features BOOST_PP_ARRAY_EQUAL and BOOST_PP_TUPLE_EQUAL.
Yes, I saw that.
On Thu, Sep 10, 2015 at 2:11 PM, Edward Diener
Given that variadic macros are supported for just about all compilers now, the Boost PP array is largely obsolete as the Boost PP tuple has all the functionality which an array has except for the fact that a tuple can never hold 0 elements while an array can. Therefore I am trying to phase out the use of Boost PP arrays in favor of Boost PP tuples.
This might just be paranoia, but has there ever been profiling done regarding large usage of PP tuple-based code vs PP array-based code? I've experienced some pretty drastic differences in memory usage during compilation when switching between different looping constructs and container types, etc., and it's not always immediately obvious why. There is certainly at least some minimal cost to always using tuples, IIUC. Particularly if these operations are performed inside of repetition constructs, though, especially with a compiler that does sophisticated macro expansion tracking, seemingly minimal differences might actually become noticeable. I've never done the benchmark myself, but I just think it might be best to proceed with caution before encouraging switching over wholesale to tuples.

I wonder if, instead, we might also consider the opposite approach. By that I mean consider using arrays and phasing out direct tuple operations, except for the ability to create a PP array from a PP tuple, automatically deducing the size. As you mentioned, arrays can properly represent an empty range, so that alone might be a good reason to prefer it as the go-to tuple-like container, even with all else being equal.

I haven't given a lot of thought to this, though, but I'm sure Paul would also have some pretty good input on the matter. Do you know his stance on this?

-Matt Calabrese
On 9/10/2015 5:44 PM, Matt Calabrese wrote:
On Thu, Sep 10, 2015 at 2:11 PM, Edward Diener
wrote: Given that variadic macros are supported for just about all compilers now, the Boost PP array is largely obsolete as the Boost PP tuple has all the functionality which an array has except for the fact that a tuple can never hold 0 elements while an array can. Therefore I am trying to phase out the use of Boost PP arrays in favor of Boost PP tuples.
This might just be paranoia, but has there ever been profiling done regarding large usage of PP tuple-based code vs PP array-based code? I've experienced some pretty drastic differences in memory usage during compilation when switching between different looping constructs and container types, etc., and it's not always immediately obvious why. There is certainly at least some minimal cost to always using tuples, IIUC. Particularly if these operations are performed inside of repetition constructs, though, especially with a compiler that does sophisticated macro expansion tracking, seemingly minimal differences might actually become noticeable. I've never done the benchmark myself, but I just think it might be best to proceed with caution before encouraging switching over wholesale to tuples.
Ouch <g> ! I admit I have never done any benchmarking of preprocessor code. This is not only compile-time code but preprocessor code, which occurs pretty early in the compilation phases. So I have never thought very hard and long about how one would measure the time spent by the compiler in macro expansion depending on whether you use one Boost PP construct versus another. Any thoughts about how anybody could accurately benchmark such time would be most welcome.

The main reason for preferring a PP tuple to a PP array when using variadic macros is that you don't have to specify the number of elements in the tuple as you do in the array, so the tuple syntax is easier to work with. Call this my own sense of syntactical elegance. It also eliminates the mistake where you might specify the wrong array size for a PP array. This mistake could more easily occur when you have nested parentheses as array elements.

I have no doubt manipulating tuples is probably slower than manipulating arrays when variadic macros are being used, since calculating the tuple size is slower than having it there as a preprocessor number. I still vote for elegance over preprocessor speed, but I understand your point of view.
I wonder if, instead, we might also consider the opposite approach. By that I mean consider using arrays and phasing out direct tuple operations, except for the ability to create a PP array from a PP tuple, automatically deducing the size. As you mentioned, arrays can properly represent an empty range, so that alone might be a good reason to prefer it as the go-to tuple-like container, even with all else being equal.
I should have specified that "phase out the use of Boost PP arrays" does not mean that they will ever be eliminated from Boost PP AFAICS. VMD makes a strong case for general emptiness when variadic macros are being used. For the two composite constructs which cannot have zero elements, tuples and seqs, I could add to Boost PP's functionality my own BOOST_VMD_xxx functionality, which allows the macro programmer to work with them starting from or ending with an "empty" state. I can certainly do that.
I haven't given a lot of thought to this, though, but I'm sure Paul would also have some pretty good input on the matter. Do you know his stance on this?
I am pretty sure I know Paul's stance since he was the one who mentioned to me that with the use of variadic macros the Boost PP array is "obsolete".
On Thu, Sep 10, 2015 at 4:15 PM, Edward Diener
I admit I have never done any benchmarking of preprocessor code. This is not only compile-time code but preprocessor code, which occurs pretty early in the compilation phases. So I have never thought very hard and long about how one would measure the time spent by the compiler in macro expansion depending on whether you use one Boost PP construct versus another. Any thoughts about how anybody could accurately benchmark such time would be most welcome.
My experiences are anecdotal, so I don't want to make precise claims, I'm just raising this as something to consider and it might be necessary to benchmark before making too many recommendations. When I was working on Boost.Generic I at one point reached a blocking point where preprocessing was consuming so much memory that I'd run out of address space (32-bit)! I just couldn't proceed when dealing with complicated concepts until I revised how I did my repetition, which brought down the memory usage considerably (switching between several disparate fold operations with small states and a single fold operation with a large state is one change that I remember vividly). I imagine that looping constructs are always more directly the culprit for these types of issues, though if you are deep inside of some repetition I wonder if even the difference of tuple and array can have noticeable impact, especially for a large number of elements. I really don't know as I've never really analyzed the problem or done rigorous profiling of these types of things, but I've stopped making too many assumptions as I've been bitten before. It also could be pretty compiler-dependent as well.
I have no doubt manipulating tuples is probably slower than manipulating arrays when variadic macros are being used, since calculating the tuple size is slower than having it there as a preprocessor number.
To be clear, I'm not strictly sure about even that: even though my initial intuition was to prefer array, I'm just hesitant to state for sure that recommending a preference of tuple is necessarily the best recommendation or the best default. Some operations are probably simpler for tuples (i.e. I'm imagining that joining two tuples together is probably faster than joining two arrays, since you can just expand both by way of variadics without caring about or having to calculate the size of the result). There could even be no or minimal measurable difference in all practical cases. I've just in practice seen surprising behavior of implementations during preprocessing that I wouldn't have expected had I not seen it happen. Memory usage in particular tends to be surprisingly intense during preprocessing (surprising to me, at least), especially if tracking of macro expansions is enabled in your compiler.

I should have specified that "phase out the use of Boost PP arrays" does not mean that they will ever be eliminated from Boost PP AFAICS.

Okay, that's good, then.

I am pretty sure I know Paul's stance since he was the one who mentioned to me that with the use of variadic macros the Boost PP array is "obsolete".

I usually assume whatever Paul suggests is best when in this domain, so my paranoia could be unfounded here. -- -Matt Calabrese
On 9/10/2015 7:58 PM, Matt Calabrese wrote:
On Thu, Sep 10, 2015 at 4:15 PM, Edward Diener
wrote: I admit I have never done any benchmarking of preprocessor code. This is not only compile-time code but preprocessor code, which occurs pretty early in the compilation phases. So I have never thought very hard and long about how one would measure the time spent by the compiler in macro expansion depending on whether you use one Boost PP construct versus another. Any thoughts about how anybody could accurately benchmark such time would be most welcome.
My experiences are anecdotal, so I don't want to make precise claims, I'm just raising this as something to consider and it might be necessary to benchmark before making too many recommendations. When I was working on Boost.Generic I at one point reached a blocking point where preprocessing was consuming so much memory that I'd run out of address space (32-bit)!
I encountered this with gcc quite often when testing VMD until I specified '-ftrack-macro-expansion=0', which solved the problem for gcc. I have also encountered an internal compiler error from clang on one of my VMD tests, most probably due to some clang limit being exceeded; I reported it to clang but so far no resolution has occurred. I also ran into situations where VC++ would give errors only to have everything work without errors when I reran the VMD tests; that sounds like some out-of-memory error.
I just couldn't proceed when dealing with complicated concepts until I revised how I did my repetition, which brought down the memory usage considerably (switching between several disparate fold operations with small states and a single fold operation with a large state is one change that I remember vividly). I imagine that looping constructs are always more directly the culprit for these types of issues, though if you are deep inside of some repetition I wonder if even the difference of tuple and array can have noticeable impact, especially for a large number of elements. I really don't know as I've never really analyzed the problem or done rigorous profiling of these types of things, but I've stopped making too many assumptions as I've been bitten before. It also could be pretty compiler-dependent as well.
I have no doubt manipulating tuples is probably slower than manipulating arrays when variadic macros are being used, since calculating the tuple size is slower than having it there as a preprocessor number.
To be clear, I'm not strictly sure about even that: even though my initial intuition was to prefer array, I'm just hesitant to state for sure that recommending a preference of tuple is necessarily the best recommendation or the best default. Some operations are probably simpler for tuples (i.e. I'm imagining that joining two tuples together is probably faster than joining two arrays, since you can just expand both by way of variadics without caring about or having to calculate the size of the result). There could even be no or minimal measurable difference in all practical cases.
Agreed.
I've just in practice seen surprising behavior of implementations during preprocessing that I wouldn't have expected had I not seen it happen. Memory usage in particular tends to be surprisingly intense during preprocessing (surprising to me, at least), especially if tracking of macro expansions is enabled in your compiler.
See the gcc note above.
I should have specified that "phase out the use of Boost PP arrays" does
not mean that they will ever be eliminated from Boost PP AFAICS.
Okay, that's good, then.
I am pretty sure I know Paul's stance since he was the one who mentioned to
me that with the use of variadic macros the Boost PP array is "obsolete".
I usually assume whatever Paul suggests is best when in this domain, so my paranoia could be unfounded here.
My recommendation is simply based on ease of use and syntactic simplicity. Without variadic macros, PP arrays are mechanisms which track the number of elements, whereas with PP tuples the number of elements has to be either hardcoded for a particular use ( is known and never changes ) or passed separately. Once variadic macros are supported the size of the PP tuple is always known and therefore much easier to use, but of course the size cannot be 0 elements as it can with a PP array. In VMD you can pass "emptiness" as a tuple of 0 elements if you like, but of course you need to check for emptiness and act accordingly. As I said I think I can improve the latter.
On Thu, Sep 10, 2015 at 5:57 PM, Edward Diener
I encountered this with gcc quite often when testing VMD until I specified '-ftrack-macro-expansion=0' and that solved the problem for gcc.
Yeah, I have to do that now with GCC, too, but FWIW the problems I was having were even before GCC added macro expansion tracking. I'm not sure what GCC does that's so memory intensive during preprocessing. -- -Matt Calabrese
On 9/10/2015 6:59 PM, Matt Calabrese wrote:
Yeah, I have to do that now with GCC, too, but FWIW the problems I was having were even before GCC added macro expansion tracking. I'm not sure what GCC does that's so memory intensive during preprocessing.
To some degree it depends on whether the preprocessor is doing what it
is supposed to do at all or just something that kinda-sorta results in
the same thing. One of the biggest performance hits that I have seen in
those preprocessors that do it right (or at least mostly) has to do with
what happens with actual arguments to macros.
Consider what happens here:
#define A 1 B
#define B 2 C
#define C 3 D
#define D 4
0 A 5
This *looks* like four nested "calls," but it isn't. Rather, this is a
stream. Ignoring the whole disabling context/blue paint for the moment,
the replacement process proceeds as follows:
0 A 5
^
0 1 B 5
^
0 1 B 5
^
0 1 2 C 5
^
0 1 2 C 5
^
0 1 2 3 D 5
^
0 1 2 3 D 5
^
0 1 2 3 4 5
^
0 1 2 3 4 5
^
0 1 2 3 4 5
^
In this scenario, once the scan position has advanced past a
preprocessing token, that token is "done". I.e. it is pure output (to
standard output, to a file, to the parser, or whatever).
Now change the scenario slightly:
#define M(x) x
M(0 A 5)
In this case, the argument is scanned for macro replacement as an
independent scan. That whole scan takes place as above, the result of
which is substituted for 'x' in the replacement list, the invocation of
'M' is replaced by that result, the scan position is placed at the first
token of that result, and top-level scanning resumes.
The overall difference here is that the presence of the argument (where
the argument is used in the replacement list as a non # or ## operand)
causes what amounts to a recursive call to the entire scanning for macro
replacement process. The results of which must be placed in a buffer,
not output, in order to perform the substitution.
So, A defined as B defined as C (and so on) is not recursive, but
M(M(M())) usually *is* recursive.
As an immediate testable example, Chaos has a macro that is used to
count scans for macro replacement:
#include
On 9/11/2015 7:04 AM, Paul Mensonides wrote:
On 9/10/2015 6:59 PM, Matt Calabrese wrote:
Yeah, I have to do that now with GCC, too, but FWIW the problems I was having were even before GCC added macro expansion tracking. I'm not sure what GCC does that's so memory intensive during preprocessing.
To some degree it depends on whether the preprocessor is doing what it is supposed to do at all or just something that kinda-sorta results in the same thing. One of the biggest performance hits that I have seen in those preprocessors that do it right (or at least mostly) has to do with what happens with actual arguments to macros.
Consider what happens here:
#define A 1 B
#define B 2 C
#define C 3 D
#define D 4
0 A 5
This *looks* like four nested "calls," but it isn't. Rather, this is a stream. Ignoring the whole disabling context/blue paint for the moment, the replacement process proceeds as follows:
0 A 5
^
0 1 B 5
^
0 1 B 5
^
0 1 2 C 5
^
0 1 2 C 5
^
0 1 2 3 D 5
^
0 1 2 3 D 5
^
0 1 2 3 4 5
^
0 1 2 3 4 5
^
0 1 2 3 4 5
^
In this scenario, once the scan position has advanced past a preprocessing token, that token is "done". I.e. it is pure output (to standard output, to a file, to the parser, or whatever).
Now change the scenario slightly:
#define M(x) x
M(0 A 5)
In this case, the argument is scanned for macro replacement as an independent scan. That whole scan takes place as above, the result of which is substituted for 'x' in the replacement list, the invocation of 'M' is replaced by that result, the scan position is placed at the first token of that result, and top-level scanning resumes.
The overall difference here is that the presence of the argument (where the argument is used in the replacement list as a non # or ## operand) causes what amounts to a recursive call to the entire scanning for macro replacement process. The results of which must be placed in a buffer, not output, in order to perform the substitution.
So, A defined as B defined as C (and so on) is not recursive, but M(M(M())) usually *is* recursive.
As an immediate testable example, Chaos has a macro that is used to count scans for macro replacement:
#include
#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) E(x)
#define E(x) x
CHAOS_PP_HALT( A( CHAOS_PP_DELVE() ) )
The result here is 6 (on entry to A, on entry to B, on entry to C, on entry to D, on entry to E, and finally when top level scanning is resumed).
Change the above to:
#include
#define A(x) B(, x)
#define B(p, x) C(, p##x)
#define C(p, x) D(, p##x)
#define D(p, x) E(, p##x)
#define E(p, x) p##x
CHAOS_PP_HALT( A( CHAOS_PP_DELVE() ) )
Now you get 2 (on entry to A and when top level scanning is resumed). This occurs because the placemarker concatenation is stopping the argument from being scanned for macro replacement all the way through most of the structure. (When this type of thing is scaled up, it can be *massively* faster than not doing it.)
None of these scenarios here are actually too bad because the recursion depth is shallow (and by "recursion depth" I mean, of course, the recursive scan depth) so you aren't getting buffers upon buffers, etc. However, a library built with performance in mind from the ground up (i.e. not Boost.Preprocessor and not Chaos in many cases) has to be built with the above type of stuff in mind.
I recall you explaining previously how using the concatenation construct ('p##x') as a means of short-circuiting rescanning increases preprocessor efficiency when designing a preprocessing library. I still think clarity of code, even library internals, is often more important than saving CPU cycles, but everyone has their own priorities.
With regard to tuples versus arrays... The answer is really "neither". Both are bad choices for a data structure. Usually they are used for passing multiple auxiliary arguments through some higher-order algorithm. The right solution there is to simply never pack them together in that fashion at all:
#define M(s, n, a, b, c) (a + b + c)
CHAOS_PP_EXPR(CHAOS_PP_REPEAT( 5, M, 1, 2, 3 ))
(i.e. no TUPLE_ELEM in sight)
You can only get random access to tuple elements for a small number of elements. So for any largish container (i.e. where performance matters more), algorithmic processing is usually more efficient with a sequence. In a library like Chaos, it also makes much more efficient use of the available recursion steps because things like folds can be unrolled (because the data structure itself provides the computational horsepower to process itself--even if the algorithm is n^2 or worse).
I am recommending seqs from now on for any fairly large composite.
But, given a choice between array and tuple as a container (rather than just a parameter packing mechanism), I would choose tuples. Otherwise, any algorithmic processing of the data structure is going to require arithmetic to maintain the size of the array. (If you statically know the size of the array, then you aren't using it as a container, but rather just a packing structure.)
Good to know. The main objection to tuple rather than array is that a tuple can't have 0 elements. I plan to alleviate that in VMD, which supports "emptiness", by allowing tuples and seqs to start or end as "emptiness". Of course testing for "emptiness" using variadic macros is very slightly flawed as you know, since you wrote that code, but since VMD supports "emptiness" I will add it to that library, and then those who want to use tuples or seqs starting with (or ending with) 0 elements can do so.
On 9/11/2015 6:37 AM, Edward Diener wrote:
The main objection to tuple rather than array is that a tuple can't have 0 elements. I plan to alleviate that in VMD, which supports "emptiness", by allowing tuples and seqs to start or end as "emptiness". Of course testing for "emptiness" using variadic macros is very slightly flawed as you know, since you wrote that code, but since VMD supports "emptiness" I will add it to that library and then those who want to use tuples or seqs as starting ( or ending with ) 0 elements can do so.
Wait... what would be considered an empty tuple or sequence?

Regards,
Paul Mensonides
On 9/11/2015 4:58 PM, Paul Mensonides wrote:
On 9/11/2015 6:37 AM, Edward Diener wrote:
The main objection to tuple rather than array is that a tuple can't have 0 elements. I plan to alleviate that in VMD, which supports "emptiness", by allowing tuples and seqs to start or end as "emptiness". Of course testing for "emptiness" using variadic macros is very slightly flawed as you know, since you wrote that code, but since VMD supports "emptiness" I will add it to that library and then those who want to use tuples or seqs as starting ( or ending with ) 0 elements can do so.
Wait... what would be considered an empty tuple or sequence?
"Emptiness" or no preprocessor token.
On 9/11/2015 3:40 PM, Edward Diener wrote:
On 9/11/2015 4:58 PM, Paul Mensonides wrote:
On 9/11/2015 6:37 AM, Edward Diener wrote:
Wait... what would be considered an empty tuple or sequence?
"Emptiness" or no preprocessor token.
So:

size 0:
size 1: ()
size 2: (,)
size 3: (,,)

?

Regards,
Paul Mensonides
On 9/11/2015 6:46 PM, Paul Mensonides wrote:
On 9/11/2015 3:40 PM, Edward Diener wrote:
On 9/11/2015 4:58 PM, Paul Mensonides wrote:
On 9/11/2015 6:37 AM, Edward Diener wrote:
Wait... what would be considered an empty tuple or sequence?
"Emptiness" or no preprocessor token.
So:
size 0:
size 1: ()
size 2: (,)
size 3: (,,)
?
Yes. Also I will rewrite a number of BOOST_PP_TUPLE_XXX macros as BOOST_VMD_TUPLE_XXX macros to allow for starting with emptiness or ending with emptiness. I will do the same for seq. Of course emptiness does not have to be treated as a tuple or a seq, but will be in my rewritten macros as appropriate. VMD is the place for this functionality since Boost PP does not have to support variadic macros to still work, and testing for emptiness without using variadic macros is too flawed to use IMO.
On Fri, Sep 11, 2015 at 4:13 PM, Edward Diener
VMD is the place for this functionality since Boost PP does not have to support variadic macros to still work, and testing for emptiness without using variadic macros is too flawed to use IMO.
Agreed. I assume the answer is "yes," but are there compilers still in use that we need to worry about and that don't support variadic macros? It's my understanding that most (all?) of the widely-used compilers had preprocessors that supported variadic macros even before C++ technically had them in the language (probably thanks to C99). Or are some implementations broken in a way that makes it sensible to simply declare that they don't support variadics? Are there any plans to eventually just remove the check and assume variadic support?

--
-Matt Calabrese
On 9/11/2015 7:22 PM, Matt Calabrese wrote:
On Fri, Sep 11, 2015 at 4:13 PM, Edward Diener
wrote: VMD is the place for this functionality since Boost PP does not have to support variadic macros to still work, and testing for emptiness without using variadic macros is too flawed to use IMO.
Agreed.
I assume the answer is "yes,"
The answer to what?
but are there compilers still in use that we need to worry about and that don't support variadic macros? It's my understanding that most (all?) of the widely-used compilers had preprocessors that supported variadic macros even before C++ technically had them in the language (probably thanks to C99), or are some implementations broken in a way that it makes sense to simply declare that they don't support variadics? Are there any plans to eventually just remove the check and assume variadic support?
Boost PP's tests for variadic macro support were written by Paul and were done so that Boost PP does not rely on anything else. I've really wanted to change it so that at least gcc is always marked as supporting variadic macros, which it has actually done since gcc 3+ AFAIK, but the difficulty of separating gcc from other compilers which mimic gcc and define __GNUC__ has kept me from doing so. For everything else I didn't intend to just assume variadic support, since the user can do so by a simple definition of BOOST_PP_VARIADICS=1.

I do agree that probably nearly every current version of a compiler supports variadic macros, even if c++11 mode and up is not defined during compilation, but who knows what earlier versions of some compilers people are still using with Boost PP (and/or VMD when it comes out in the next Boost release), so I would rather not just remove the check and assume variadic macro support.

I do understand that just assuming variadic macro support makes writing macros much easier. This is what I have done with VMD. Any suggestions/implementations of new useful macros for VMD, which always assumes variadic macro support, are always welcome.
On Fri, Sep 11, 2015 at 4:04 AM, Paul Mensonides
On 9/10/2015 6:59 PM, Matt Calabrese wrote:
Yeah, I have to do that now with GCC, too, but FWIW the problems I was
having were even before GCC added macro expansion tracking. I'm not sure what GCC does that's so memory intensive during preprocessing.
To some degree it depends on whether the preprocessor is doing what it is supposed to do at all or just something that kinda-sorta results in the same thing. One of the biggest performance hits that I have seen in those preprocessors that do it right (or at least mostly) has to do with what happens with actual arguments to macros.
Consider what happens here:
#define A 1 B
#define B 2 C
#define C 3 D
#define D 4
0 A 5
This *looks* like four nested "calls," but it isn't. Rather, this is a stream. Ignoring the whole disabling context/blue paint for the moment, the replacement process proceeds as follows:
0 A 5
^
0 1 B 5
  ^
0 1 B 5
    ^
0 1 2 C 5
    ^
0 1 2 C 5
      ^
0 1 2 3 D 5
      ^
0 1 2 3 D 5
        ^
0 1 2 3 4 5
        ^
0 1 2 3 4 5
          ^
0 1 2 3 4 5
            ^
In this scenario, once the scan position has advanced past a preprocessing token, that token is "done". I.e. it is pure output (to standard output, to a file, to the parser, or whatever).
Now change the scenario slightly:
#define M(x) x
M(0 A 5)
In this case, the argument is scanned for macro replacement as an independent scan. That whole scan takes place as above, the result of which is substituted for 'x' in the replacement list, the invocation of 'M' is replaced by that result, the scan position is placed at the first token of that result, and top-level scanning resumes.
The overall difference here is that the presence of the argument (where the argument is used in the replacement list as a non # or ## operand) causes what amounts to a recursive call to the entire scanning for macro replacement process. The results of which must be placed in a buffer, not output, in order to perform the substitution.
So, A defined as B defined as C (and so on) is not recursive, but M(M(M())) usually *is* recursive.
As an immediate testable example, Chaos has a macro that is used to count scans for macro replacement:
#include
#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) E(x)
#define E(x) x
CHAOS_PP_HALT( A( CHAOS_PP_DELVE() ) )
The result here is 6 (on entry to A, on entry to B, on entry to C, on entry to D, on entry to E, and finally when top level scanning is resumed).
Change the above to:
#include
#define A(x) B(, x)
#define B(p, x) C(, p##x)
#define C(p, x) D(, p##x)
#define D(p, x) E(, p##x)
#define E(p, x) p##x
CHAOS_PP_HALT( A( CHAOS_PP_DELVE() ) )
Now you get 2 (on entry to A and when top level scanning is resumed). This occurs because the placemarker concatenation is stopping the argument from being scanned for macro replacement all the way through most of the structure. (When this type of thing is scaled up, it can be *massively* faster than not doing it.)
None of these scenarios here are actually too bad because the recursion depth is shallow (and by "recursion depth" I mean, of course, the recursive scan depth) so you aren't getting buffers upon buffers, etc. However, a library built with performance in mind from the ground up (i.e. not Boost.Preprocessor and not Chaos in many cases) has to be built with the above type of stuff in mind.
Awesome write-up! This should really be explained somewhere in the Boost.Preprocessor docs, just because I imagine that most people, myself included, really only have a vague understanding of how the preprocessor is /supposed/ to work. Perhaps it's out of the scope of the library, as people should just already know it as a part of the language, but that seems to me to be even less true regarding the preprocessor than it is regarding the more complicated template rules.
On Fri, Sep 11, 2015 at 4:04 AM, Paul Mensonides
With regard to tuples versus arrays... The answer is really "neither". Both are bad choices for a data structure. Usually they are used for passing multiple auxiliary arguments through some higher-order algorithm. The right solution there is to simply never pack them together in that fashion at all:
#define M(s, n, a, b, c) (a + b + c)
CHAOS_PP_EXPR(CHAOS_PP_REPEAT( 5, M, 1, 2, 3 ))
(i.e. no TUPLE_ELEM in sight)
You can only get random access to tuple elements for a small number of elements. So for any largish container (i.e. where performance matters more), algorithmic processing is usually more efficient with a sequence. In a library like Chaos, it also makes much more efficient use of the available recursion steps because things like folds can be unrolled (because the data structure itself provides the computational horsepower to process itself--even if the algorithm is n^2 or worse).
Okay, that goes along with what I generally do anyway: convert to and deal with sequences behind the scenes. The only thing I find tuple-like containers useful for is user input.

Speaking of Chaos, I know I try to bring this up whenever you make an appearance, but can you please put Chaos up for review? I don't know the status of which compilers can handle it without workarounds, but as long as a compliant compiler can handle it, that's good enough for Boost, and you can just choose to not support broken implementations. One problem that I run into with Boost.Preprocessor is that since I can't "recurse" with BOOST_PP_SEQ_FOR_EACH, I have to frequently use some other repetition construct instead, and I'm sure that I'm paying for that decision. This is particularly a problem if the macro I'm developing is to be used by other people who are also writing preprocessor code, meaning that if I choose to use something like BOOST_PP_SEQ_FOR_EACH internally, I'm effectively restricting where people can invoke my macro. IIRC, Chaos allows for all of these looping-like constructs to be used in such a manner, and I think that's really important for non-trivial uses of the preprocessor library. Even as Boost.Preprocessor persists, having Chaos in Boost would be a great addition.

--
-Matt Calabrese
Speaking of Chaos, I know I try to bring this up whenever you make an appearance, but can you please put Chaos up for review? I don't know the status of which compilers can handle it without workarounds, but as long as a compliant compiler can handle it, that's good enough for Boost, and you can just choose to not support broken implementations. One problem that I run into with Boost.Preprocessor is that since I can't "recurse" with BOOST_PP_SEQ_FOR_EACH I have to frequently use some other repetition construct instead, and I'm sure that I'm paying for that decision. This is particularly a problem if the macro I'm developing is to be used by other people who are also writing preprocessor code, meaning that if I choose to use something like BOOST_PP_SEQ_FOR_EACH internally, I'm effectively restricting where people can invoke my macro. IIRC, Chaos allows for all of these looping-like constructs to be used in such a manner, and I think that's really important for non-trivial uses of the preprocessor library.
You can still use deferred expressions (like in Chaos) to do recursive `BOOST_PP_SEQ_FOR_EACH_I` calls. It even works in MSVC (although you may need to apply more scans). I showed how in a previous email here: http://boost.2283326.n4.nabble.com/preprocessor-nested-for-each-not-working-...

So in the email I write this:

#define BOOST_PP_SEQ_FOR_EACH_R_ID() BOOST_PP_SEQ_FOR_EACH_R
#define DEFER(x) x BOOST_PP_EMPTY()

#define S0 (0)(1)(2)(3)
#define S1 (5)(6)(7)(8)

#define M4(R, DATA, ELEM) (DATA,ELEM)
#define M2(R, DATA, ELEM) DEFER(BOOST_PP_SEQ_FOR_EACH_R_ID)()(R, M4, ELEM, S1);

BOOST_PP_EXPAND(BOOST_PP_SEQ_FOR_EACH_R(1, M2, ~, S0))

For your use case of allowing the user to call `BOOST_PP_SEQ_FOR_EACH_I`, you would instead defer the user-supplied macro. One difference with this is that the user will now have to apply an extra scan. In Chaos, all the algorithms are set up in a way that `CHAOS_PP_EXPR` will apply the correct number of scans. For simple cases, `BOOST_PP_EXPAND` may be enough; however, if the user is calling another deferred algorithm, then additional scans will need to be applied. A simple way to solve this is to just blast it with a bunch of scans like this:

#define EVAL(...) EVAL1(EVAL1(EVAL1(__VA_ARGS__)))
#define EVAL1(...) EVAL2(EVAL2(EVAL2(__VA_ARGS__)))
#define EVAL2(...) EVAL3(EVAL3(EVAL3(__VA_ARGS__)))
#define EVAL3(...) EVAL4(EVAL4(EVAL4(__VA_ARGS__)))
#define EVAL4(...) EVAL5(EVAL5(EVAL5(__VA_ARGS__)))
#define EVAL5(...) __VA_ARGS__

This is simple, but not efficient. If you want efficient, of course, just use Chaos PP.

Paul Fultz II
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Edward Diener Sent: Thursday, September 10, 2015 17:11
Given that variadic macros are supported for just about all compilers now,
Out of curiosity, do you mean all C++ compilers or all C compilers?

The reason I ask is that I've made a C equivalent of STL, using Boost PP. It's just a proof of concept at this stage (only a few containers and algorithms implemented, so far). It was originally intended for OpenCL kernels, but since C++ support was added in 2.1, it seemed pointless to pursue that objective. However, there are still plenty of pure C codebases out there, and having had to work in a couple, I'd been yearning for the power and convenience of STL.

It's still my intention to clean up and release it for C, someday. I realize that this probably falls outside the scope of BOOST_PP, and certainly the larger Boost project. But if/when I do release it, I'd like to retain the broadest compiler compatibility, as many C projects are legacy code for legacy platforms (with legacy tools), and I would prefer not to force users to track down and install an old version of BOOST_PP.

BTW, is anyone aware of such a library already in existence?

Thanks for all the great work that went into BOOST_PP. Before running across it, I never knew the humble C preprocessor was so capable.

Matt

________________________________
This e-mail contains privileged and confidential information intended for the use of the addressees named above. If you are not the intended recipient of this e-mail, you are hereby notified that you must not disseminate, copy or take any action in respect of any information contained in it. If you have received this e-mail in error, please notify the sender immediately by e-mail and immediately destroy this e-mail and its attachments.
On 9/13/2015 12:29 PM, Gruenke,Matt wrote:
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Edward Diener Sent: Thursday, September 10, 2015 17:11
Given that variadic macros are supported for just about all compilers now,
Out of curiosity, do you mean all C++ compilers or all C compilers?
I did mean C++ compilers but variadic macros were officially added to C compilers with C99, which was even before they were officially added to C++ compilers.
The reason I ask is that I've made a C equivalent of STL, using Boost PP. It's just a proof of concept, at this stage (only a few containers and algorithms implemented, so far). It was originally intended for OpenCL kernels, but since C++ support was added in 2.1, it seemed pointless to pursue that objective. However, there are still plenty of pure C codebases out there, and having had to work in a couple, I'd been yearning for the power and convenience of STL.
It's still my intention to clean up and release it for C, someday. I realize that this probably falls outside the scope of BOOST_PP, and certainly the larger Boost project. But if/when I do release it, I'd like to retain the broadest compiler compatibility, as many C projects are legacy code for legacy platforms (with legacy tools) and would prefer not to force users to track down and install an old version of BOOST_PP.
I have no intention of turning on variadic macro support for all compilers in Boost PP, and I doubt if Paul does either. The end-user can always turn on variadic macro support for a compiler by defining BOOST_PP_VARIADICS=1 before including Boost PP headers. And yes, Boost PP works for C compilers as well as C++ compilers.
BTW, is anyone aware of such a library already in existence?
If by STL you mean the C++ standard template library, I cannot imagine how you could do that in straight C.
Thanks for all the great work that went into BOOST_PP. Before running across it, I never knew the humble C preprocessor was so capable.
If it wasn't for VC++, whose preprocessor is decidedly non-standard, as well as a few other compilers which do attempt to implement a C++ standard preprocessor but have some subtle bugs, Boost PP would be much easier to program and even more capable. Paul's Chaos library is an example of what could be done if you don't have to deal with non-conforming C/C++ preprocessors and you are also a genius at writing macros.
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Edward Diener Sent: Sunday, September 13, 2015 21:15 On 9/13/2015 12:29 PM, Gruenke,Matt wrote:
The reason I ask is that I've made a C equivalent of STL, using Boost PP.
BTW, is anyone aware of such a library already in existence?
if by STL you mean the C++ standard template library I can not imagine how you could do that in straight C.
Of course, it's slightly more cumbersome to *use* than STL. You must pass in a bundle of type information to every operation, since the compiler won't do it for you. Constructor, destructor, copy constructor, etc. But it's just as powerful as STL, supporting generic algorithms that can operate on recursive containers of user-defined types. And the resulting composite types have unique C type signatures and can be passed into C functions (i.e. not just macros).

As this is rapidly getting off-topic, I'll just post a sample, before moving further discussion off the list. All of the following code compiles (in strict C89, IIRC) and works as advertised. If anyone is interested, I can probably post the nascent library up on github.

First, you must define a specific type. To make things easier to read, we bind names to each type signature:

#define mytype  LIST( SEED( int ) )  /* list of ints */
#define loltype LIST( mytype )       /* list of lists of ints */

DEFINE( mytype )
DEFINE( loltype )

Now that the types are defined, you can make typedefs to refer to them more easily within your C code:

typedef TS_TYPENAME( mytype ) mytype_t;
typedef TS_TYPENAME( loltype ) loltype_t;

In fact, let's write some functions to print them:

void printitem( int val, const char **sep )
{
    printf( "%s%d", *sep, val );
    *sep = " ";
}

void printlist( mytype_t *l )
{
    const char *empty = "";
    const char **sep = &empty;
    FOR_EACH( mytype, *l, printitem, sep );
}

Let's also wrap the generic algorithm FIND() with some diagnostic code:

void find( mytype_t *l, int val )
{
    ITER_TYPENAME( ITER( mytype ) ) i = END( mytype, *l );
    FIND( mytype, *l, val, i );
    printf( "%s %d\n", (i != END( mytype, *l ) ? "found" : "didn't find"), val );
}

Finally, instantiate the generic Bubble Sort. Since this expands into a function definition, there's one macro for declaring it (e.g. in a header file) and another for actually defining it (i.e. for use in just one of your .c files):

DECLARE_BUBBLESORT( loltype );
DEFINE_BUBBLESORT( loltype )

Let's create macros for referring to instances:

#define myvar_v  VAR( mytype, myvar )
#define lolvar_v VAR( loltype, lolvar )

And now instantiate them:

INST_VAR( myvar_v );
INST_VAR( lolvar_v );

To initialize an instance, you call the generic INIT() macro. Likewise, CLEANUP() does what the name implies. Now, a big, long main() function to test it out! Note the generic COPY() function, as well as alternate versions of INST(), INIT(), and CLEANUP() - one set takes a pair of type and instance names, while the other takes a VAR() expression.

int main()
{
#   define yourvar_v VAR( mytype, yourvar )

    INST( mytype, yourvar );  /* equivalent to INST_VAR( VAR( mytype, yourvar ) ) */
    INST( loltype, lolcopy );
    ITER_TYPENAME( ITER( loltype ) ) i_lol;

    INIT_VAR( myvar_v );
    INIT_VAR( lolvar_v );
    printf( "size() %d\n", SIZE_VAR( myvar_v ) );
    PUSH_BACK_VAR( myvar_v, 1 );
    PUSH_BACK_VAR( myvar_v, 2 );
    PUSH_BACK_VAR( myvar_v, 3 );
    PUSH_BACK_VAR( lolvar_v, myvar );
    PUSH_BACK_VAR( lolvar_v, myvar );

    INIT_VAR( yourvar_v );
    COPY( mytype, yourvar, myvar );
    INIT( loltype, lolcopy );
    COPY( loltype, lolcopy, lolvar );
    printf( "size( mycopy ): %d\n", SIZE_VAR( yourvar_v ) );
    printf( "size( lolcopy ): %d\n", SIZE( loltype, lolcopy ) );
    printf( "front = %d back = %d\n", FRONT_VAR( yourvar_v ), BACK_VAR( yourvar_v ) );

    find( &yourvar, 0 );
    find( &VAR_INST( yourvar_v ), 1 );
    find( &yourvar, 2 );
    find( &yourvar, 17 );

    PUSH_BACK_VAR( myvar_v, 4 );
    printf( "Eq(): %d\n", EQ( mytype, myvar, yourvar ) );
    printf( "Lt(): %d\n", LT( mytype, myvar, yourvar ) );
    printf( "Gt(): %d\n", LT( mytype, yourvar, myvar ) );

    PUSH_BACK( mytype, FRONT( loltype, lolcopy ), 4 );
    printf( "Eq(): %d\n", EQ( loltype, lolvar, lolcopy ) );
    printf( "Lt(): %d\n", LT( loltype, lolvar, lolcopy ) );
    printf( "Gt(): %d\n", LT( loltype, lolcopy, lolvar ) );

    i_lol = END( loltype, lolcopy );
    FIND( loltype, lolcopy, myvar, i_lol );
    if (i_lol != END( loltype, lolcopy ))
    {
        printf( "found the list: " );
        printlist( &DEREF( ITER( loltype ), i_lol ) );
        printf( "\n" );
    }
    else
        printf( "didn't find the list.\n" );

    printf( "FRONT( lolcopy ): " );
    printlist( &FRONT( loltype, lolcopy ) );
    printf( "\n" );

    /* this is redundant, but it shows how to sort locally or via an instantiated version. */
    BUBBLESORT( loltype, lolcopy );
    CALL_BUBBLESORT( loltype, lolcopy );
    printf( "FRONT( lolcopy ): " );
    printlist( &FRONT( loltype, lolcopy ) );
    printf( "\n" );

    CLEANUP_VAR( myvar_v );
    printf( "size(): %d\n", SIZE_VAR( myvar_v ) );
    CLEANUP_VAR( yourvar_v );
    printf( "size(): %d\n", SIZE_VAR( yourvar_v ) );
    CLEANUP( loltype, lolvar );
    printf( "size(): %d\n", SIZE_VAR( myvar_v ) );
    CLEANUP( loltype, lolcopy );
    printf( "size(): %d\n", SIZE_VAR( yourvar_v ) );

    return 0;
}

Matt
participants (6)
- Antoine Leblanc
- Edward Diener
- Gruenke,Matt
- Matt Calabrese
- Paul Fultz II
- Paul Mensonides