On 01/10/2017 04:22 PM, Mathias Gaunard wrote:
On 10 January 2017 at 20:07, Zach Laine wrote:

I agree with all of these complaints. This is in fact why I wrote Yap. The compile times are very good, even for an obnoxious number of terminals (thousands), and there is no memory allocated at all -- or did you mean compiler memory usage? I haven't looked at that.
Compiler memory usage, of course. When a TU takes 4 GB or more to compile, it leads to lots of problems, even if RAM is cheap and you could put hundreds of gigabytes in your build server.
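(Side note for anyone following along: the "no memory allocated" point above holds because a Yap expression is just a nested value type living on the stack. Below is a minimal sketch, assuming Yap's documented make_terminal()/evaluate() interface; I haven't compiled it against the current Yap sources.)

```cpp
#include <boost/yap/yap.hpp>
#include <iostream>

int main()
{
    // Operators on terminals build the whole expression tree as one
    // nested value type on the stack; no heap allocation is involved
    // in either building or evaluating it.
    auto expr = boost::yap::make_terminal(2) * boost::yap::make_terminal(3)
              + boost::yap::make_terminal(4);
    std::cout << boost::yap::evaluate(expr) << "\n"; // prints 10
}
```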
Zach, w.r.t. compile-time benchmarking, Louis has a compiler benchmark library here:

https://github.com/ldionne/metabench

I did have a brief look at it; however, I didn't see an easy way to vary parameters of the benchmark (for example, the size of expressions) and compare the results. It uses embedded ruby, and I couldn't figure out from that how to do what I want. Instead, I resorted to gmake for-loops, as shown here:

https://github.com/cppljevans/spirit/blob/get_rhs/workbench/x3/rule_defns/Ma...

resulting in the output shown here:

https://github.com/cppljevans/spirit/blob/get_rhs/workbench/x3/rule_defns/be...

Interestingly enough, the method that performed worse was the one (RULE2RHS_CTX_LIST) which stored the rule2rhs information in the context. In contrast, GET_RHS_CRTP stored this information by overloading functions generated by macros. What's interesting is that, based on what Brook Milligan says in his message, Proto uses a context as well, whereas your library uses a set of function overloads.

With regard to the gmake method of comparing compile times, I realize that's sorta kludgy; years ago, I used a series of python programs to do the equivalent. If you find the gmake for-loop method unacceptable, I can try to find the python method instead.

HTH.

-regards,
Larry
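P.S. In case the linked Makefile is hard to follow, the gmake for-loop idea is just: compile the same TU repeatedly, passing the expression size on the command line, and time each compile. A stripped-down sketch follows; the file name, the EXPR_SIZE macro, and the loop in the comment are made up for illustration and are not what the linked Makefile actually uses.

```cpp
// bench.cpp -- a TU whose expression size is set from the command line.
// The driving loop (in gmake or a shell) is along the lines of:
//
//   for n in 8 16 32 64 ; do time g++ -c -DEXPR_SIZE=$n bench.cpp ; done
//
#ifndef EXPR_SIZE
#define EXPR_SIZE 8
#endif

#include <boost/preprocessor/repetition/repeat.hpp>

// Generate EXPR_SIZE terminals and one expression summing all of them,
// so the expression's size (and hence compile cost) tracks -DEXPR_SIZE.
#define DECL_TERM(z, n, unused) int t##n = n;
#define ADD_TERM(z, n, unused) + t##n
BOOST_PP_REPEAT(EXPR_SIZE, DECL_TERM, ~)
int total = 0 BOOST_PP_REPEAT(EXPR_SIZE, ADD_TERM, ~);
```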
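P.P.S. To make the RULE2RHS_CTX_LIST vs. GET_RHS_CRTP contrast concrete, here is a stripped-down sketch of the two ways of storing the rule2rhs mapping. All names are hypothetical; this is just the shape of the two techniques, not the actual workbench code.

```cpp
#include <type_traits>

struct rule_a {}; struct rule_b {};
struct rhs_a  {}; struct rhs_b  {};

template <typename T> struct type_is { using type = T; };

// 1) Context-list style (the RULE2RHS_CTX_LIST shape): the mapping is a
//    compile-time list threaded through the context, and lookup is a
//    recursive linear search whose depth grows with the number of rules.
struct empty_ctx {};
template <typename Rule, typename Rhs, typename Outer>
struct ctx { using rule = Rule; using rhs = Rhs; using outer = Outer; };

template <typename Rule, typename Ctx>
struct get_rhs_ctx
  : std::conditional_t<std::is_same_v<Rule, typename Ctx::rule>,
                       type_is<typename Ctx::rhs>,
                       get_rhs_ctx<Rule, typename Ctx::outer>> {};

// 2) Overload style (the GET_RHS_CRTP shape): a macro generates one
//    function overload per rule, and lookup is ordinary overload
//    resolution -- no recursive walk over a list.
#define DEFINE_RHS(rule, rhs) rhs get_rhs_fn(rule);
DEFINE_RHS(rule_a, rhs_a)
DEFINE_RHS(rule_b, rhs_b)

// Quick checks that both lookups map rule_a to rhs_a.
using ctx_t = ctx<rule_b, rhs_b, ctx<rule_a, rhs_a, empty_ctx>>;
static_assert(std::is_same_v<get_rhs_ctx<rule_a, ctx_t>::type, rhs_a>);
static_assert(std::is_same_v<decltype(get_rhs_fn(rule_a{})), rhs_a>);
```

The overload style leaves the search to the compiler's overload-resolution machinery rather than to recursive template instantiations, which may be part of why it compiled faster in the output linked above.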