Interest in new parsing library?
I'm trying to gauge interest in a parsing library to replace Boost.Spirit 2/Spirit X3. I'm also looking for endorsements. The library is intended to remedy some shortcomings of Boost.Spirit*. I think these are great libraries, but Spirit 2 was written in pre-11 C++ (I think; certainly its dependencies were). Most-to-all of the downsides stem from that -- long compile times, inscrutable compilation failures, etc. (Boost.Parser compile times are quite low.) I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later.
From the introduction in the online docs: """ Boost.Parser is a parser combinator library. That is, it consists of a set of low-level primitive parsers, and operations that can be used to combine those parsers into more complicated parsers.
There are primitive parsers that parse epsilon (the empty string), chars, ints, floats, etc.

There are operations which combine parsers to create new parsers. For instance, the Kleene star operation takes an existing parser p and creates a new parser that matches zero or more occurrences of whatever p matches. Both callable objects and operator overloads are used for the combining operations. For instance, operator*() is used for Kleene star, and you can also write repeat(n)[p] to create a parser for exactly n repetitions of p.

Boost.Parser also tries to accommodate the multiple ways that people often want to get a parse result out of their parsing code. Some parsing may best be done by returning an object that represents the result of the parse. Other parsing may best be done by filling in a preexisting data structure. Yet other parsing may best be done by parsing small sections of a large document, and reporting the results of subparsers as they are finished, via callbacks. Boost.Parser accommodates all these ways of working, and even makes it possible to do callback-based or non-callback-based parsing without rewriting any code (except by changing the top-level call from parse() to callback_parse()).

All of Boost.Parser's public interfaces are sentinel- and range-friendly, just like the interfaces in std::ranges.

Boost.Parser is Unicode-aware through and through. When you parse ranges of char, Boost.Parser does not assume any particular encoding — not Unicode or any other encoding. Parsing of inputs other than plain chars assumes that the input is Unicode. In the Unicode-aware code paths, all parsing is done by matching code points. This means that you can feed UTF-8 strings into Boost.Parser, both as input and within your parser, and the right sort of matching occurs. For instance, if your parser is trying to match repetitions of the char '\xcc' (which is a lead byte from a UTF-8 sequence, and so is malformed UTF-8 if not followed by an appropriate UTF-8 code unit), it will not match the start of "\xcc\x80" (UTF-8 for the code point U+0300). Boost.Parser knows that the matching must be whole-code-point, and so it interprets the char '\xcc' as the code point U+00CC.

Error reporting is important to get right, and it is important to make errors easy to understand, especially for end-users. Boost.Parser produces runtime parse error messages that are very similar to the diagnostics that you get when compiling with GCC and Clang (it even supports warnings that don't fail the parse). The exact token associated with a diagnostic can be reported to the user, with the containing line quoted, and with a marker pointing right at the token. Boost.Parser takes care of this for you; your parser does not need to include any special code to make this happen. Of course, you can also replace the error handler entirely, if it doesn't fit your needs.

Debugging complex parsers can be a real nightmare. Boost.Parser makes it trivial to get a trace of your entire parse, with easy-to-read (and very verbose) indications of where each part of the trace is within the parse, the state of values produced by the parse, etc. Again, you don't need to write any code to make this happen — you just pass a parameter to parse().

Dependencies are still a nightmare in C++, so Boost.Parser can be used as a purely standalone library, independent of Boost.
"""

Boost.Parser aims to be a superset of Boost.Spirit* in most ways. Major things missing from the set of features in Spirit 2 + Spirit X3 are:

- A separate lexer.
- Binary parsers (meaning for parsing bits, not binary numbers written as text; the latter is fully supported).

I've been in touch with Joel de Guzman, Hartmut Kaiser, and Michael Caisse, to make sure I was not toe-stomping, for those who are concerned about that. They gave this new library their blessing. One feature comes entirely from them: Boost.Parser is usable in a Boost-free environment -- as a standalone library -- at the user's option. They said that was the #1 request from users, which surprised me a bit.

The Github page is here: https://github.com/tzlaine/parser
The online docs are here: https://tzlaine.github.io/parser

For an extended example, here is a JSON parser that passes all the published JSON tests, including most of the optional ones, in only about 300 lines of code:
https://tzlaine.github.io/parser/doc/html/boost_parser__proposed_/extended_e...

Finally, for those wanting to know how this lib differs from Boost.Spirit* without digging through the docs, here is the doc page that explains Boost.Parser's relationship to Boost.Spirit*:

"""
Boost.Spirit is a library that is already in Boost, and it has been around for a long time. However, it does not suit user needs in some ways.

- Spirit 2 suffers from very long compile times.
- Spirit 2 has error reporting that requires a lot of user intervention to work.
- Spirit 2 requires user intervention, including a (long) recompile, to enable parse tracing.
- Spirit X3 has rules that do not compose well — the attributes produced by a rule can change depending on the context in which you use the rule.
- Spirit X3 is missing many of the convenient interfaces to parsers that Spirit 2 had. For instance, you cannot add parameters to a parser.
- All versions of Spirit have Unicode support, but it is quite difficult to get working.

I wanted a library that does not suffer from any of the above limitations. It should be noted that while Spirit X3 only has a couple of flaws in the list above, the one related to rules is a deal-breaker. The ability to write rules, test them in isolation, and then re-use them throughout a complex parser is essential.

Though no version of Boost.Spirit (Spirit 2 or Spirit X3) suffers from all those limitations, there also does not exist any one version that avoids all of them. Boost.Parser does so. However, there are a lot of great ideas in Boost.Spirit that have been retained in Boost.Parser. Both libraries:

- use the same operator overloads to combine parsers;
- use approximately the same set of directives to influence the parse (e.g. lexeme[]);
- provide loosely-coupled rules that are separately compilable (at least for Spirit X3); and
- are built around a flexible parse context object that has state added to and removed from it during the parse (again, comparing to Spirit X3).
"""

Zach
Zach Laine wrote:
Dependencies are still a nightmare in C++, so Boost.Parser can be used as a purely standalone library, independent of Boost. ... The presence of Boost headers is detected using __has_include().
That's not a good idea. Depending on random features of the environment for ABI-incompatible changes will create many more problems than it would solve. Just use a macro.
On Thu, Dec 28, 2023 at 5:42 PM Peter Dimov via Boost wrote:
Zach Laine wrote:
Dependencies are still a nightmare in C++, so Boost.Parser can be used as a purely standalone library, independent of Boost. ... The presence of Boost headers is detected using __has_include().
That's not a good idea. Depending on random features of the environment for ABI-incompatible changes will create many more problems than it would solve. Just use a macro.
That might be true in general, but I don't think it is, given how I'm using them. __has_include is only used in 4 places (a brief sketch of the first use follows the list):
1) To detect Boost.Preprocessor headers. If detected, Boost.Parser
defines a convenience macro that uses Boost.Preprocessor macros.
2) To detect <coroutine>. If detected, it is included; this is
necessary because at least one implementation reported
__cpp_impl_coroutine but did not provide the header. This is cruft
that I can remove, but is otherwise harmless.
3) To detect
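For illustration, a minimal sketch of the Boost.Preprocessor detection described in item 1; the macro name is a placeholder for this sketch, not Boost.Parser's actual configuration macro:

    // Hypothetical sketch only; the macro name is a placeholder, not the one
    // Boost.Parser actually uses.
    #if defined(__has_include)
    #  if __has_include(<boost/preprocessor/cat.hpp>)
    #    define PARSER_SKETCH_HAS_BOOST_PP 1
    #  endif
    #endif

    #if defined(PARSER_SKETCH_HAS_BOOST_PP)
    #  include <boost/preprocessor/cat.hpp>
       // The convenience macro would be defined here in terms of BOOST_PP_* macros.
    #endif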
Zach Laine wrote:
1) To detect Boost.Preprocessor headers. If detected, Boost.Parser defines a convenience macro that uses Boost.Preprocessor macros.
Feel free to steal https://github.com/boostorg/describe/blob/develop/include/boost/describe/det...
On Fri, Dec 29, 2023 at 2:46 PM Peter Dimov via Boost wrote:
Zach Laine wrote:
1) To detect Boost.Preprocessor headers. If detected, Boost.Parser defines a convenience macro that uses Boost.Preprocessor macros.
Feel free to steal
https://github.com/boostorg/describe/blob/develop/include/boost/describe/det...
Nice! I had no idea this existed. Thanks, I'll use this instead. Zach
On Thu, Dec 28, 2023 at 22:05 Zach Laine via Boost wrote:
I'm trying to gauge interest in a parsing library to replace Boost.Spirit 2/Spirit X3. I'm also looking for endorsements.
The library is intended to remedy some shortcomings of Boost.Spirit*. I think these are great libraries, but Spirit 2 was written in pre-11 C++ (I think; certainly its dependencies were). Most-to-all of the downsides stem from that -- long compile times, inscrutable compilation failures, etc. (Boost.Parser compile times are quite low.)
I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later.
From the introduction in the online docs: """ Boost.Parser is a parser combinator library. That is, it consists of a set of low-level primitive parsers, and operations that can be used to combine those parsers into more complicated parsers.
There are primitive parsers that parse epsilon (the empty string), chars, ints, floats, etc.
There are operations which combine parsers to create new parsers. For instance, the Kleene star operation takes an existing parser p and creates a new parser that matches zero or more occurrences of whatever p matches. Both callable objects and operator overloads are used for the combining operations. For instance, operator*() is used for Kleene star, and you can also write repeat(n)[p] to create a parser for exactly n repetitions of p.
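For instance, a minimal sketch assuming the char_, repeat, and parse() spellings from the online docs (the bp namespace alias is mine):

    #include <boost/parser/parser.hpp>
    #include <iostream>

    namespace bp = boost::parser;

    int main()
    {
        // Kleene star: zero or more decimal digits; the attribute is a std::string.
        auto digits = *bp::char_('0', '9');
        if (auto result = bp::parse("0123", digits))
            std::cout << *result << "\n";   // prints 0123

        // repeat(n)[p]: exactly three digits.
        auto three = bp::repeat(3)[bp::char_('0', '9')];
        if (auto result = bp::parse("042", three))
            std::cout << *result << "\n";   // prints 042
    }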
Boost.Parser also tries to accommodate the multiple ways that people often want to get a parse result out of their parsing code. Some parsing may best be done by returning an object that represents the result of the parse. Other parsing may best be done by filling in a preexisting data structure. Yet other parsing may best be done by parsing small sections of a large document, and reporting the results of subparsers as they are finished, via callbacks. Boost.Parser accommodates all these ways of working, and even makes it possible to do callback-based or non-callback-based parsing without rewriting any code (except by changing the top-level call from parse() to callback_parse()).
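For example, a hedged sketch of the first two ways of working, assuming the parse() overloads described in the online docs; the callback style additionally needs callback rules and callback_parse(), so it is not shown:

    #include <boost/parser/parser.hpp>
    #include <vector>

    namespace bp = boost::parser;

    int main()
    {
        auto ints = *bp::int_;

        // Way 1: get the result back as a returned attribute
        // (an optional containing std::vector<int> on success).
        auto result = bp::parse("1 2 3", ints, bp::ws);

        // Way 2: fill in a preexisting data structure; parse() reports success.
        std::vector<int> out;
        bool ok = bp::parse("1 2 3", ints, bp::ws, out);

        (void)result;
        (void)ok;
    }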
All of Boost.Parser's public interfaces are sentinel- and range-friendly, just like the interfaces in std::ranges.
Boost.Parser is Unicode-aware through and through. When you parse ranges of char, Boost.Parser does not assume any particular encoding — not Unicode or any other encoding. Parsing of inputs other than plain chars assumes that the input is Unicode. In the Unicode-aware code paths, all parsing is done by matching code points. This means that you can feed UTF-8 strings into Boost.Parser, both as input and within your parser, and the right sort of matching occurs. For instance, if your parser is trying to match repetitions of the char '\xcc' (which is a lead byte from a UTF-8 sequence, and so is malformed UTF-8 if not followed by an appropriate UTF-8 code unit), it will not match the start of "\xcc\x80" (UTF-8 for the code point U+0300). Boost.Parser knows that the matching must be whole-code-point, and so it interprets the char '\xcc' as the code point U+00CC.
Error reporting is important to get right, and it is important to make errors easy to understand, especially for end-users. Boost.Parser produces runtime parse error messages that are very similar to the diagnostics that you get when compiling with GCC and Clang (it even supports warnings that don't fail the parse). The exact token associated with a diagnostic can be reported to the user, with the containing line quoted, and with a marker pointing right at the token. Boost.Parser takes care of this for you; your parser does not need to include any special code to make this happen. Of course, you can also replace the error handler entirely, if it doesn't fit your needs.
Debugging complex parsers can be a real nightmare. Boost.Parser makes it trivial to get a trace of your entire parse, with easy-to-read (and very verbose) indications of where each part of the trace is within the parse, the state of values produced by the parse, etc. Again, you don't need to write any code to make this happen — you just pass a parameter to parse().
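A minimal sketch of that, assuming the trace::on parameter described in the online docs:

    #include <boost/parser/parser.hpp>

    namespace bp = boost::parser;

    int main()
    {
        auto ints = *bp::int_;
        // The extra argument turns on a verbose trace of the whole parse.
        auto result = bp::parse("1 2 3", ints, bp::ws, bp::trace::on);
        (void)result;
    }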
Dependencies are still a nightmare in C++, so Boost.Parser can be used as a purely standalone library, independent of Boost. """
Boost.Parser aims to be a superset of Boost.Spirit* in most ways. Major things missing from the set of features in Spirit 2 + Spirit X3 are:
- A separate lexer.
- Binary parsers (meaning for parsing bits, not binary numbers written as text; the latter is fully supported).
I've been in touch with Joel de Guzman, Hartmut Kaiser, and Michael Caisse, to make sure I was not toe-stomping, for those who are concerned about that. They gave this new library their blessing. One feature comes entirely from them: Boost.Parser is usable in a Boost-free environment -- as a standalone library -- at the user's option. They said that was the #1 request from users, which surprised me a bit.
The Github page is here: https://github.com/tzlaine/parser The online docs are here: https://tzlaine.github.io/parser
For an extended example, here is a JSON parser that passes all the published JSON tests, including most of the optional ones, in only about 300 lines of code:
https://tzlaine.github.io/parser/doc/html/boost_parser__proposed_/extended_e...
Finally, for those wanting to know how this lib differs from Boost.Spirit* without digging through the docs, here is the doc page that explains Boost.Parser's relationship to Boost.Spirit*: """ Boost.Spirit is a library that is already in Boost, and it has been around for a long time.
However, it does not suit user needs in some ways.
- Spirit 2 suffers from very long compile times.
- Spirit 2 has error reporting that requires a lot of user intervention to work.
- Spirit 2 requires user intervention, including a (long) recompile, to enable parse tracing.
- Spirit X3 has rules that do not compose well — the attributes produced by a rule can change depending on the context in which you use the rule.
- Spirit X3 is missing many of the convenient interfaces to parsers that Spirit 2 had. For instance, you cannot add parameters to a parser.
- All versions of Spirit have Unicode support, but it is quite difficult to get working.

I wanted a library that does not suffer from any of the above limitations. It should be noted that while Spirit X3 only has a couple of flaws in the list above, the one related to rules is a deal-breaker. The ability to write rules, test them in isolation, and then re-use them throughout a complex parser is essential.
Though no version of Boost.Spirit (Spirit 2 or Spirit X3) suffers from all those limitations, there also does not exist any one version that avoids all of them. Boost.Parser does so. However, there are a lot of great ideas in Boost.Spirit that have been retained in Boost.Parser. Both libraries:
- use the same operator overloads to combine parsers;
- use approximately the same set of directives to influence the parse (e.g. lexeme[]);
- provide loosely-coupled rules that are separately compilable (at least for Spirit X3); and
- are built around a flexible parse context object that has state added to and removed from it during the parse (again, comparing to Spirit X3).
"""
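As a small illustration of the shared directive set above, a hedged sketch of lexeme[], assuming the bp::parse/bp::ws interfaces from the docs; lexeme[] suppresses the skipper inside it, so the two parses below behave differently:

    #include <boost/parser/parser.hpp>

    namespace bp = boost::parser;

    int main()
    {
        auto letters = +bp::char_('a', 'z');

        // With the whitespace skipper active, the interior space is skipped over.
        auto r1 = bp::parse("he llo", letters, bp::ws);              // matches "hello"

        // Inside lexeme[], the skipper is suppressed, so the space ends the match
        // and the whole-input parse fails.
        auto r2 = bp::parse("he llo", bp::lexeme[letters], bp::ws);  // no match

        (void)r1;
        (void)r2;
    }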
Hi Zach,

Thank you for writing and sharing this library. I intend to test it on my mini-language early next year. For now, let me dig a bit into the high-level differences between Boost.Parser and Boost.Spirit X3.

Your introduction mentions "a separate lexer" as a feature that Boost.Spirit is missing. How does that square with the entire section for Spirit.Lex in the Boost.Spirit docs?

"Boost.Parser aims to be a superset of Boost.Spirit". But Boost.Spirit is also a generator.

You mention that "Spirit X3 has rules that do not compose well". I personally never experienced this. Is there an example somewhere that would illustrate this problem?

What is the recommendation of the Boost.Spirit authors to programmers who need to do parsing? Is Boost.Parser simply the newer and improved version, or do they have disjoint sets of use cases?

Personally, skimming through the docs, I find the feature of producing custom error and warning messages very attractive. This is what I was always missing from parsing libraries.

Thanks again for your effort.

Regards,
&rzej;
On 12/29/23 5:52 PM, Andrzej Krzemienski via Boost wrote:
On Thu, Dec 28, 2023 at 22:05 Zach Laine via Boost
wrote: I'm trying to gauge interest in a parsing library to replace Boost.Spirit 2/Spirit X3. I'm also looking for endorsements.
The library is intended to remedy some shortcomings of Boost.Spirit*. I think these are great libraries, but Spirit 2 was written in pre-11 C++ (I think; certainly its dependencies were). Most-to-all of the downsides stem from that -- long compile times, inscrutable compilation failures, etc. (Boost.Parser compile times are quite low.)
I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later.
[snip]
[snip]
What is the recommendation of Boost.Spirit authors to the programmers that need to do parsing? Is Boost.Parser simply the newer and improved version, or do they have disjoint sets of use cases?
Personally, skimming through the docs, I find the feature of producing custom error and warning messages very attractive. This is what I was always missing from the parsing libraries.
Thanks again for your effort.
Hello Y'all,

I support this endeavor. Zach already discussed this with Hartmut and me a year or so ago. Unfortunately, I'm no longer able to dedicate time to supporting Boost.Spirit after more than two decades. Nikita currently serves as the maintainer, and the extent of his assistance has been incredibly noteworthy. But he does not seem to have been active recently. So unless someone steps up to maintain Boost.Spirit, perhaps it's time to retire the library :-(

Regards,
-- Joel
Zach Laine wrote: ...
I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later. ...
The Github page is here: https://github.com/tzlaine/parser The online docs are here: https://tzlaine.github.io/parser
Some observations:

I understand, in principle, the motivation behind asserting at runtime instead of failing compilation, but I don't think the same argument applies to rejecting *eps parsers. It seems to me that a static assert for any *p or +p where p can match epsilon (can succeed while consuming no input) would be clear enough. (E.g. +-p, *(p | q | eps), *attr(...), +&p, etc.)

Interestingly, this would reject **p and +*p, because these parsers can go into an infinite loop. The current behavior is to collapse them into *p, which is useful, but technically wrong. This raises the possibility of, instead of rejecting *p or +p when p can match epsilon, just 'fixing' its behavior so that when p matches epsilon, the outer parser just exits the loop. This will make the current collapsing behavior equivalent to the non-collapsed one.

Also, errors should definitely go to std::cerr by default, not std::cout. Errors aren't program output, and routing them to stdout is script-hostile.
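For concreteness, a hedged sketch of the kind of epsilon-matching parser discussed above, using the char_/eps spellings from the docs (the parser is only constructed, not run):

    #include <boost/parser/parser.hpp>

    namespace bp = boost::parser;

    int main()
    {
        auto p = bp::char_('a');

        // The inner alternative can succeed without consuming any input, so a
        // Kleene star around it is the kind of parser a static_assert could
        // reject; actually running such a parser risks an infinite loop.
        [[maybe_unused]] auto risky = *(p | bp::eps);
    }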
I'm incredibly interested in these binary parsers. Where in the documentation are they located exactly? And what about examples? I was toying with writing a binary protocol parser and would love to use this library to write it. - Christian
Also, it'd be good to get some benchmarks going, showcasing the JSON parsing from this library compared to Boost.JSON's benchmark suite. - Christian
On Fri, Dec 29, 2023 at 9:29 AM Christian Mazakas via Boost < boost@lists.boost.org> wrote:
Also, it'd be good to get some benchmarks going, showcasing the JSON parsing from this library compared to Boost.JSON's benchmark suite.
A naive implementation of the benchmark will be grossly unfair to Boost.Parser. To do this correctly it would require that the Boost.Parser implementation use the same containers as Boost.JSON, as they are optimized for JSON-specific workloads. And it would require the implementation to use the explicit internal stack model adopted by Boost.JSON (copied from RapidJSON).

That is a non-trivial amount of work for which the value proposition is unclear; Boost.Parser is designed to work with generic containers while Boost.JSON's parser is tuned to work with the json::value container that comes with the library.

Of course, if someone else wants to do the work to perform such a comparison, I certainly wouldn't discourage them :)

Thanks
On Fri, Dec 29, 2023 at 11:49 AM Vinnie Falco via Boost wrote:
On Fri, Dec 29, 2023 at 9:29 AM Christian Mazakas via Boost < boost@lists.boost.org> wrote:
Also, it'd be good to get some benchmarks going, showcasing the JSON parsing from this library compared to Boost.JSON's benchmark suite.
A naive implementation of the benchmark will be grossly unfair to Boost.Parser.
To do this correctly it would require that the Boost.Parser implementation use the same containers as Boost.JSON, as they are optimized for JSON-specific workloads. And it would require the implementation to use the explicit internal stack model adopted by Boost.JSON (copied from RapidJSON).
That is a non-trivial amount of work for which the value proposition is unclear; Boost.Parser is designed to work with generic containers while Boost.JSON's parser is tuned to work with the json::value container that comes with the library.
Of course, if someone else wants to do the work to perform such a comparison, I certainly wouldn't discourage them :)
I agree completely. I think you will find that Boost.Parser is pretty fast. However, it's a general-purpose parsing lib, and so it is never going to catch up to a bespoke parser like Boost.JSON's. Any lovingly hand-crafted parser is almost certain to beat a very general parser combinator lib. Zach
On Fri, Dec 29, 2023 at 10:50 AM Christian Mazakas via Boost wrote:
I'm incredibly interested in these binary parsers. Where in the documentation are they located exactly? And what about examples?
I was toying with writing a binary protocol parser and would love to use this library to write it.
I really stated that awkwardly it seems. What I was trying to say is that there are no binary (bit) parsers in Boost.Parser. There *are* in Boost.Spirit. Both libraries can parse binary numbers written as text. Zach
On Fri, Dec 29, 2023 at 10:35 AM Peter Dimov via Boost wrote:
Zach Laine wrote: ...
I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later. ...
The Github page is here: https://github.com/tzlaine/parser The online docs are here: https://tzlaine.github.io/parser
Some observations:
I understand, in principle, the motivation behind asserting at runtime instead of failing compilation, but I don't think the same argument applies to rejecting *eps parsers. It seems to me that a static assert for any *p or +p where p can match epsilon (can succeed while consuming no input) would be clear enough. (E.g. +-p, *(p | q | eps), *attr(...), +&p, etc.)
Why? It may be better to static_assert, but it's not clear to me why
Interestingly, this would reject **p and +*p, because these parsers can go into an infinite loop. The current behavior is to collapse them into *p, which is useful, but technically wrong. This raises the possibility of, instead of rejecting *p or +p when p can match epsilon, just 'fixing' its behavior so that when p matches epsilon, the outer parser just exits the loop. This will make the current collapsing behavior equivalent to the non-collapsed one.
At first, I thought this was a great idea. Now I'm ambivalent.

The way I might implement this is in repeat_parser (that's the only looping parser, modulo its subclasses). I could then do a couple of things: 1) detect that we have not eaten any of the input, but have matched repeat_parser's subparser, and terminate the repetition; or 2) detect that we have matched repeat_parser's subparser, *and* that the subparser is an unconditional match.

#1 is nice, because you don't need any way of tagging parser types as being epsilon-like. Without this or some similar approach you could end up with a closed set of types that trigger this short-circuiting. This seems like a maintenance problem to me, but moreover an extensibility problem for users. #2 suffers from this closed-set problem. To fix #2, I could add a template param (or constexpr static member, same diff) that acts as a tag.

#1 is problematic though, and anything where the no-input-consuming match is conditional is equally problematic. Each parser could have arbitrary side effects, via semantic actions. So this parser:

*(if_(c)[p] | eps[a])

could match the eps first, if 'c' evaluated to false, and later match 'p', depending on what 'a' does. If 'a' flips the value of 'c', then the parse will always match 'p'. If 'a' increments a counter, then the parse might eventually match 'p', but just take a long time to do it; this case might also result in an infinite loop. In the case of the increment that ends in a match, maybe 'a' increments a counter, but also does some other important side effect. This may be a useful pattern to someone, somewhere. This is obviously contrived, but the point is that there are currently some things that you can express that would become non-expressible.

tl;dr: I like the idea, but I'm struggling with how to do it so that we don't limit expressivity.
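For what it's worth, a minimal sketch of what option #1 might look like in a hand-rolled repetition loop; the names and signature are illustrative only, not Boost.Parser's repeat_parser internals:

    // Illustrative only: a Kleene-star loop that terminates when the subparser
    // succeeds without consuming input (option #1 above).  Subparser is any
    // callable bool(Iter &, Sentinel) that advances the iterator on a match.
    template <typename Iter, typename Sentinel, typename Subparser>
    bool parse_kleene(Iter & first, Sentinel last, Subparser const & sub)
    {
        while (first != last) {
            Iter const before = first;
            if (!sub(first, last))
                break;          // subparser failed; the Kleene star still succeeds
            if (first == before)
                break;          // matched without consuming input: stop the loop
        }
        return true;            // zero or more matches always succeed
    }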
Also, errors should definitely go to std::cerr by default, not std::cout. Errors aren't program output, and routing them to stdout is script-hostile.
Ach! Yeah, that's just an oversight. I've opened a ticket, thanks. Zach
On 12/28/23 1:05 PM, Zach Laine via Boost wrote:
I'm trying to gauge interest in a parsing library to replace Boost.Spirit 2/Spirit X3. I'm also looking for endorsements.
The library is intended to remedy some shortcomings of Boost.Spirit*. I think these are great libraries, but Spirit 2 was written in pre-11 C++ (I think; certainly its dependencies were). Most-to-all of the downsides stem from that -- long compile times, inscrutable compilation failures, etc. (Boost.Parser compile times are quite low.)
I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later.
Very interesting.

The Boost Serialization library has used Boost Spirit x1 to parse xml archives for over 20 years. I had some familiarity with recursive descent parsing and relatively little with template metaprogramming, which made for a strenuous learning experience. The library comes with incredible documentation. Once I got the hang of it, I was/am happy with it. I haven't touched it in 20 years. I totally and completely forgot everything about it. This is a testament to the capability of its author. The concept of separating the grammar specification from token handlers made the actual code almost nothing.

I've looked at the comments here and a little at the documentation. One question:

Is there a reason it shouldn't be named Spirit4?

Robert Ramey
On Sat, Dec 30, 2023 at 1:33 PM Robert Ramey via Boost wrote:
On 12/28/23 1:05 PM, Zach Laine via Boost wrote:
I'm trying to gauge interest in a parsing library to replace Boost.Spirit 2/Spirit X3. I'm also looking for endorsements.
The library is intended to remedy some shortcomings of Boost.Spirit*. I think these are great libraries, but Spirit 2 was written in pre-11 C++ (I think; certainly its dependencies were). Most-to-all of the downsides stem from that -- long compile times, inscrutable compilation failures, etc. (Boost.Parser compile times are quite low.)
I'm calling my proposal Boost.Parser, and it follows many of the conventions of Boost.Spirit 2 and X3, such as the operators used for overloading, the names of many parsers and directives, etc. It requires C++17 or later.
Very interesting.
The Boost Serialization library has used Boost Spirit x1 to parse xml archives for over 20 years. I had some familiarity with recursive descent parsing and relatively little with template metaprogramming, which made for a strenuous learning experience. The library comes with incredible documentation. Once I got the hang of it, I was/am happy with it. I haven't touched it in 20 years. I totally and completely forgot everything about it. This is a testament to the capability of its author. The concept of separating the grammar specification from token handlers made the actual code almost nothing. I've looked at the comments here and a little at the documentation. One question:
Is there a reason it shouldn't be named Spirit4 ?
Not in particular. Joel and Hartmut even said they would be fine with that. I just prefer a name that is more direct in its meaning, and that reflects that it is from a new author. Zach
participants (7)
- Andrzej Krzemienski
- Christian Mazakas
- Joel de Guzman
- Peter Dimov
- Robert Ramey
- Vinnie Falco
- Zach Laine