Given that preprocessor checks are often used for compiler workarounds, and pcpp is not a full C++ frontend, one would have to make sure pcpp defines the same set of predefined macros the compiler does. In the particular case of MSVC, that would make libraries like Boost.PP and Boost.VMD treat pcpp the same way they do MSVC, which is probably suboptimal, if functional at all. I guess that, for such a tandem to be workable, pcpp has to define its own predefined macros, and PP and VMD have to test for them before they test the other compiler-specific macros.
You've struck exactly at the main use case for pcpp, and why I didn't start from Wave or an existing preprocessor but instead reinvented the wheel, and in Python, not C. I am specifically implementing a "partially executing" pre-preprocessor which can be programmatically and dynamically instructed to transform a set of input files containing preprocessing commands into other files with some of those preprocessor commands executed, expanded or replaced, and some passed through. pcpp by default acts as a straight preprocessor, but it can also be told to pass through #if logic it can't fully execute due to unknowns, or to partially execute #if logic. It can be told to pass through #define and #undef but also to execute them (or not) on a per-macro basis. It would be easy enough to tell it to execute no preprocessing commands except #include and include guards, for example.

I'm sure you can see the big benefits of pregenerating canned preprocessed files such that #including a Boost library is much faster than before, because most of the preprocessing work is already done. Right now, generating those is tedious work using hacky scripts on source files with splicing metadata injected via comments etc., all of which is brittle. pcpp will allow for a FAR more robust solution which can safely be left to a CI to run per commit if desired. pcpp is basically Facebook's Warp (https://github.com/facebookarchive/warp) but done much more flexibly and usefully (no offence intended to Warp's developers, but Warp isn't very useful outside a very limited use case).

My ideal end goal is for the download page of a Boost library to provide a set of tick boxes and drop-down menus that let a user pre-preprocess a Boost library into a custom "drop in and go" single-file edition for their particular use case, just as you can with, say, jQuery downloads. Again, choosing Python instead of C makes that safe and secure.
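To make the "partial execution" idea concrete, here is a deliberately tiny, self-contained sketch (not pcpp's actual code or API, and object-like macros only) of a pass that executes #define/#undef for a chosen whitelist of macros and passes every other directive, including #if logic it does not understand, through untouched:

```python
import re

def partially_preprocess(lines, execute):
    """Execute #define/#undef only for macro names in `execute`;
    pass all other lines, including unknown directives, through."""
    macros, out = {}, []
    for line in lines:
        s = line.strip()
        m = re.match(r'#\s*define\s+(\w+)\s+(.*)', s)
        if m and m.group(1) in execute:
            macros[m.group(1)] = m.group(2)   # executed: directive swallowed
            continue
        m = re.match(r'#\s*undef\s+(\w+)', s)
        if m and m.group(1) in execute:
            macros.pop(m.group(1), None)
            continue
        # Expand executed macros in the remaining text; everything else
        # (#if blocks, non-whitelisted #defines, ...) passes through.
        for name, body in macros.items():
            line = re.sub(r'\b%s\b' % name, body, line)
        out.append(line)
    return out
```

With `execute={"N"}`, a `#define N 4` is swallowed and `N` expanded in the output, while an unrelated `#if` block survives verbatim for the real compiler's preprocessor to resolve.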
I don't know if I'll have the time to reach that, but I'm an awful lot closer now than I was a month ago.
Still, merely to run the Boost PP and Boost VMD tests, which are decent tests of fairly hardcore C++ standard preprocessor conformance, pcpp could minimally define __cplusplus >= 201103L or __STDC_VERSION__ >= 199901L, without necessarily identifying itself otherwise, and Boost PP/Boost VMD will treat the preprocessor as strictly C++ standard conformant with variadic macro support. But what you write above is certainly correct in general: to test other Boost libraries and end users' code, pcpp would have to stand in as the preprocessor for some other compiler.
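As a toy model of that detection (Boost.PP actually does this through #if tests in its config headers, not in Python), the decision reduces to comparing the two version macros against those thresholds:

```python
def variadics_enabled(predefined):
    """Rough model: a preprocessor advertising C++11 (__cplusplus >=
    201103L) or C99 (__STDC_VERSION__ >= 199901L) is treated as
    conformant with variadic macro support. `predefined` maps macro
    names to their integer values; absent means undefined (0 in #if)."""
    return (predefined.get('__cplusplus', 0) >= 201103 or
            predefined.get('__STDC_VERSION__', 0) >= 199901)
```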
BTW Boost already has an almost completely conformant C++ preprocessor in Boost Wave. I could not have developed VMD or helped support Boost PP without its ability to show correct macro expansion. It has been absolutely invaluable in that respect.
During the use case needs analysis for this mini-project, which I conducted on Reddit (https://www.reddit.com/r/cpp/comments/5ss6cv/any_interest_in_a_python_c99_preprocessor/?st=izzekruw&sh=5e1177c9), I discounted Wave as I felt its implementation was not flexible enough for this use case of dynamic rewriting. Also, an implementation written in C or C++ cannot be dynamically driven by a build process the way a Python implementation can; e.g. cmake can write a Python script into a file and inject that into pcpp.

I am also *extremely* sure that I could not have developed a conforming C preprocessor in Boost.Spirit in just eighty hours of work. Correct recursive function macro expansion turned out to be a real head scratcher; I ended up solving it using a token colouring approach. Python is so much more of a productivity language and ecosystem than C++: you can write high-performance, high-quality code much more quickly because the productivity support in the ecosystem is vastly superior. I really wish WG21 and the Standard C++ Foundation approached C++ evolution like the Python Software Foundation does, but my views on that are very well understood by now.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
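[For readers curious about the token colouring mentioned above: the core idea, essentially the hide sets of the classic C macro-replacement semantics, is that every token remembers which macro expansions produced it, and those macros are barred from expanding it again. A minimal sketch for object-like macros only, nothing like pcpp's real implementation:]

```python
def expand(text, macros):
    """Macro expansion with token colouring: each token carries the
    frozenset of macro names already used to produce it ('paint'),
    so no macro can re-expand inside its own expansion. That is the
    property that makes recursive macros terminate. Object-like
    macros and whitespace tokenisation only, for brevity."""
    def walk(tokens):
        out = []
        for tok, paint in tokens:
            if tok in macros and tok not in paint:
                body = [(t, paint | {tok}) for t in macros[tok].split()]
                out.extend(walk(body))       # rescan the replacement
            else:
                out.append((tok, paint))     # painted or not a macro
        return out
    seed = [(t, frozenset()) for t in text.split()]
    return " ".join(t for t, _ in walk(seed))
```

A self-referential macro terminates rather than looping: `expand("SELF", {"SELF": "1 + SELF"})` leaves the inner `SELF` unexpanded, just as a conforming C preprocessor does.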