Boost review of the 'convert' library starts tomorrow, May 12, and runs through May 21.
The review of the Convert library starts tomorrow, Monday, May 12, and goes through Wednesday, May 21.

The Convert library builds on the original design of boost::lexical_cast and the experience gained with it, and takes those conversion/transformation-related ideas further:

* to be applicable to a wider range of conversion-related use cases,
* to provide a more flexible, extendible and configurable type-conversion framework.

The Convert library can be cloned from GitHub at https://github.com/yet-another-user/boost.convert. The library follows the modular-boost format. Just clone it into the 'convert' subdirectory of modular-boost/libs and run 'b2 headers' to create a link to its header-file directory in the modular-boost/boost subdirectory. The library comes with documentation in its top-level index.html or doc/html/index.html file.

The library is by Vladimir Batov and is a second, more focused iteration of his library that was reviewed in the past. I am Edward Diener and I am again serving as the review manager of the library.

Comments, questions, and reviews are all welcome. Please try to sum up your review by answering these questions:

What is your evaluation of the design?
What is your evaluation of the implementation?
What is your evaluation of the documentation?
What is your evaluation of the potential usefulness of the library?
Did you try to use the library? With what compiler? Did you have any problems?
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Are you knowledgeable about the problem domain?

And finally: Do you think the library should be accepted as a Boost library?

As always, whether or not you feel the library should be accepted into Boost, please specify any changes you would like to see that would make the library better in your estimation.
Dear Edward, dear Vladimir, dear list, I am excited to submit my first official review for Boost. I hope many reviews will follow. :-) In order to be free of any biases, I have not read the other reviews (yet). I'm sorry to repeat anything that has already been discussed. Edward Diener wrote:
Comments, questions, and reviews will all be welcome for the library. Please try to sum up your review by answering these questions:
Are you knowledgeable about the problem domain?
A bit. I'm sure I haven't done as much with conversion as Vladimir, but I do know a little bit about it.
What is your evaluation of the documentation?
The first sentence of the introduction explains the primary motivation for the library. That's a good thing. Unfortunately, that sentence is too long and complicated; it discourages further reading. I would recommend splitting it into several shorter sentences, for example like this: "The main purpose of the framework is to provide a single consistent interface to various conversion methods. A single interface with uniform behaviour helps to improve productivity and correctness in software development."

There is too much mumbo-jumbo: "The API facade specifies and formalizes the contract between the user and actual conversion facilities", and so on and so forth. This may give some users the impression that the library is only intended for advanced, specialised users. I presume this was not intended.

At the bottom of the getting-started page: "By design this is boost::lexical_cast's only behavior. Fairly straightforward and comprehensible. Unfortunately, it makes quite a few legitimate process/program flows difficult and awkward to implement. That is the niche that Boost.Convert is trying to fill." To me this seems like downgrading the purpose of your library. First you set out to provide a universal, extensible interface for anything related to conversion. Now, on second thought, your intention is just to eliminate the cumbersome exception handling of boost::lexical_cast. This is not the best way to sell your library to somebody like me.

At the top of the performance page: "The C++11 family of std::to_string(), std::stoi(), etc. were not tested due to unavailability." I'm OK with this for now, but I would be annoyed if I found this in the docs of the final release. Maybe there's some serious availability issue that I'm unaware of, but it still looks *lazy*. I would suggest either fixing the issue or not mentioning it.

Overall the documentation does explain the library very well.
I understood everything at first reading and it's clear why the library works the way it does. The succession of the chapters is natural. I'm also relieved that the documentation isn't overly lengthy. I did not find the reference section particularly enlightening; it's just a slimmed-down copy of the code with some unexplained annotations. The motivation section is good: it provides both a more detailed motivation and a convincing use case. I would however suggest renaming it to "rationale" and moving it closer to the front of the doc.
What is your evaluation of the design?
I like how the conversion API is completely type-agnostic, so that source type and target type can vary independently and neither needs to be a string type.

I find the syntax `convert<Target>::from(source)` a bit counter-intuitive. A syntax in which the source comes before the target would seem more natural to me, but I understand that might be harder to implement.

I strongly dislike the need to create a converter object (the ubiquitous cnv in the docs) and the need to pass it (of all places!) to the convert::from static member function. Having read the examples and the performance page I understand the potential benefit of using a persistent converter object that may accept additional arguments, but maybe you can involve the converter object elsewhere in the boost::convert interface. So instead of

    boost::cstringstream_converter cnv;
    boost::convert<Target>::from(source, cnv);

I would like to see something more like

    boost::cstringstream_converter cnv;
    boost::convert<Target>::with(cnv).from(source);

I would like to add to this that "convert" here rather means "extract". Code should be as self-explanatory as possible.

I definitely like how convert::result (to be replaced by tr1::optional) defers the decision of how to handle errors until later. I also like the `if (!result)` idiom; I think this is an elegant way to handle conversion errors.

At a higher level, I think the division of the library into an API facade and an extensible collection of particular converters is sensible.
What is your evaluation of the implementation?
The header converter/base.hpp defines a few macros; I think the names of those macros should be prefixed with BOOST_. I didn't spot any other issues, but I didn't try very hard either. The code looks quite clean to me.
What is your evaluation of the potential usefulness of the library?
I think a library like this is justified for special use cases that require lots of conversions, like XML processing as mentioned in the motivation section of the doc. I don't think it's very fit for general use because of the slightly long-winded syntax. Users will often prefer other interfaces that work through a single, short call. I also don't think this idiom would be appropriate beyond type conversions, for example in encryption as suggested in the doc. Of course, encryption in a sense is a conversion, but most functions are ultimately conversions and I think people would agree that the idiom proposed here would be needlessly cumbersome for most of such "conversions". Still, I certainly think there are valid use cases and I also think the approach taken in this library is appropriate.
Did you try to use the library? With what compiler? Did you have any problems?
I successfully compiled the test program with Apple Clang 5.1 (based on 3.4). It ran successfully except for one error:

    test_convert.cpp(313): test 'double_eng == eng_expected' failed in function 'int main(int, const char **)'

This stage of the review took longer than necessary, partly due to my incompetence (fatigue) and partly due to a lack of documentation/adoption of standard Boost practices, as well as a somewhat unfortunate choice to give #included files a .cpp extension. I didn't try anything else.
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I spent roughly two and a half hours carefully reading the documentation and about half an hour quickly glancing over the code.
And finally:
Do you think the library should be accepted as a Boost library?
Yes, but it isn't ready. At least three things need to happen:

1) Make the doc more accessible to read, with shorter sentences and fewer fancy words, especially on the first page.
2) At least try very hard to think of a more natural interface. I think the most pressing issue is the current practice of converting "from" a converter object.
3) Further boostify the library: use prefixes for the macros, automate test building with a jamfile, add example programs.

Hope this helps!

-Julian
Julian, Thank you for your submission and comments. They are much appreciated. On 05/20/2014 08:20 AM, Julian Gonggrijp wrote:
...
What is your evaluation of the documentation?
The first sentence of the introduction explains the primary motivation for the library. That's a good thing. Unfortunately, that sentence is too long and complicated; it discourages further reading...
There is too much mumbo-jumbo: "The API facade specifies and formalizes the contract between the user and actual conversion facilities", and so on and so forth.
I understand. :-) I tried to squeeze what I saw as important into a few introductory sentences, namely the formalization of how the user "interacts" with / deploys conversion facilities. thesaurus.com does not give me anything better to replace "formalization" with. :-)
... At the bottom of the getting started page: ... At the top of the performance page:
"The C++11 family of std::to_string(), std::stoi(), etc. were not tested due to unavailablility."
I'm OK with this for now, but I would be annoyed if I found this in the docs of the final release. Maybe there's some serious availability issue that I'm unaware of, but it still looks *lazy*. I would suggest either fixing the issue or not mentioning it.

Thank you. If the submission is successful, I'll be re-working the implementation and the docs based on all the input received.
...
What is your evaluation of the design?
I like how the conversion API is completely type-agnostic, so that source type and target type can vary independently and neither needs to be a string type.
I find the syntax `convert<Target>::from(source)` a bit counter-intuitive. A syntax in which the source comes before the target would seem more natural to me, but I understand that might be harder to implement.

For type-deduction purposes the order has to be

    template<...> optional<TypeOut> convert(TypeIn, Converter)

If you could post a snippet showing how to achieve what you are suggesting, I'd love to study it and possibly incorporate it.

As for std::stoi(): I am on Ubuntu 12.04 with gcc-4.6. Those facilities are not available to me, which I did in fact state as "due to unavailability". ;-)
I strongly dislike the need to create a converter object (the ubiquitous cnv in the docs) and the need to pass it (of all places!) to the convert::from static member function. Having read the examples and the performance page I understand the potential benefit of using a persistent converter object that may accept additional arguments, but maybe you can involve the converter object elsewhere in the boost::convert interface. So instead of
boost::cstringstream_converter cnv;
boost::convert<Target>::from(source, cnv);
I would like to see something more like
boost::cstringstream_converter cnv;
boost::convert<Target>::with(cnv).from(source);
I would like to add to this that "convert" here rather means "extract". Code should be as self-explanatory as possible.
The interface and function names are very subjective matters and everyone has a (differing) opinion. Unfortunately, it's impossible to satisfy them all. As a few reviewers expressed their dislike of "from", I am currently leaning towards

    int v = boost::convert(str, cnv).value();

It's not as self-explanatory but more in line with how lexical_cast does it. As for passing the converter to a separate function call, I suspect it'll have performance penalties, as the converter will need to be stored somewhere for later use. I had a similar design during the first "convert" review -- the performance was shocking.
I definitely like how convert::result (to be replaced by tr1::optional) defers the decision of how to handle errors until later. I also like the `if (!result)` idiom; I think this is an elegant way to handle conversion errors.
At a higher level, I think the division of the library into an API facade and an extensible collection of particular converters is sensible.

I am really glad to hear that, because IMO it's the most important design decision that I wanted evaluated and discussed.
...
What is your evaluation of the potential usefulness of the library?
I think a library like this is justified for special use cases that require lots of conversions, like XML processing as mentioned in the motivation section of the doc. I don't think it's very fit for general use because of the slightly long-winded syntax. Users will often prefer other interfaces that work through a single, short call.

Understood. Here I feel that the community is not as passionate as I am about the advantages of a uniform and *familiar* interface. :-( The problem is that to use that mentioned "short call" one has to know/remember it and how to use it. That is far from easy when one deals with many different libraries providing different interfaces.

I also don't think this idiom would be appropriate beyond type conversions, for example in encryption as suggested in the doc. Of course, encryption in a sense is a conversion, but most functions are ultimately conversions and I think people would agree that the idiom proposed here would be needlessly cumbersome for most of such "conversions".

I hear you, although I admit I do not share your view. My example would be lexical_cast. Coding a type to be used with lexical_cast is a pain. However, when I come across a lexical_cast call (in unfamiliar code), I immediately know what's going on, what the behavior and side effects are, and so on. You do not get that with some "int v = foo.to_int();", do you?
... I successfully compiled the test program with Apple Clang 5.1 (based on 3.4). It ran successfully except for one error:
test_convert.cpp(313): test 'double_eng == eng_expected' failed in function 'int main(int, const char **)'

I believe that's because different compilers and platforms have different locale representations. Say, Microsoft's "1.235e-002" is "1.235e-02" with gcc. I wonder what "double_eng" was in your test (printed on line 310)?
...
And finally:
Do you think the library should be accepted as a Boost library?

Yes, but it isn't ready. At least three things need to happen:
1) Make the doc more accessible to read, with shorter sentences and fewer fancy words, especially on the first page.
2) At least try very hard to think of a more natural interface. I think the most pressing issue is the current practice of converting "from" a converter object.
3) Further boostify the library: use prefixes for the macros, automate test building with a jamfile, add example programs.
Thank you for your "conditional yes" vote. It's much appreciated... and more things on my TODO list. :-)
Vladimir Batov wrote:
I also don't think this idiom would be appropriate beyond type conversions, for example in encryption as suggested in the doc. Of course, encryption in a sense is a conversion, but most functions are ultimately conversions and I think people would agree that the idiom proposed here would be needlessly cumbersome for most of such "conversions".

I hear you, although I admit I do not share your view. My example would be lexical_cast. Coding a type to be used with lexical_cast is a pain. However, when I come across a lexical_cast call (in unfamiliar code), I immediately know what's going on, what the behavior and side effects are, and so on. You do not get that with some "int v = foo.to_int();", do you?
I probably wasn't clear about what I meant by "conversions" (quoted): I meant functions in general. Your framework is general enough to replace

    x = std::sqrt(y);

with

    boost::sqrt_convert sqrtcnv;
    x = boost::convert<double>::from(y, sqrtcnv).value();

but nobody would find that a good idea. I think encryption is more like std::sqrt than like std::stoi in this regard. Your framework seems appropriate for true type conversions only.
... I successfully compiled the test program with Apple Clang 5.1 (based on 3.4). It ran successfully except for one error:
test_convert.cpp(313): test 'double_eng == eng_expected' failed in function 'int main(int, const char **)'

I believe that's because different compilers and platforms have different locale representations. Say, Microsoft's "1.235e-002" is "1.235e-02" with gcc. I wonder what "double_eng" was in your test (printed on line 310)?
I'll happily run an adjusted test case for you if you provide the code. Cheers, Julian
On 05/20/2014 11:02 AM, Julian Gonggrijp wrote:
Vladimir Batov wrote:
I also don't think this idiom would be appropriate beyond type conversions, for example in encryption as suggested in the doc. Of course, encryption in a sense is a conversion, but most functions are ultimately conversions and I think people would agree that the idiom proposed here would be needlessly cumbersome for most of such "conversions".

I hear you, although I admit I do not share your view. My example would be lexical_cast. Coding a type to be used with lexical_cast is a pain. However, when I come across a lexical_cast call (in unfamiliar code), I immediately know what's going on, what the behavior and side effects are, and so on. You do not get that with some "int v = foo.to_int();", do you?

I probably wasn't clear about what I meant by "conversions" (quoted): I meant functions in general. Your framework is general enough to replace
x = std::sqrt(y);
with
boost::sqrt_convert sqrtcnv;
x = boost::convert<double>::from(y, sqrtcnv).value();
but nobody would find that a good idea.
Any idea/proposal taken too far becomes absurd. So, I am not suggesting we replace std::sqrt. On the other hand, I personally like

    d1 = boost::convert<double>(x, xcnv).value();
    d2 = boost::convert<double>(y, ycnv).value();
    d3 = boost::convert<double>(z, zcnv).value();

much better than some obscure

    d1 = x.to_double();
    d2 = y.extract_double(param);
    d3 = z.simplify();

as boost::convert advertises known/familiar/instantly recognizable behavior, whereas the seemingly innocuous "to_double()" and the others are dark horses that I need to spend time getting to know. When one multiplies that effort manifold, the advantages of using familiar interfaces become clear.
I think encryption is more like std::sqrt than like std::stoi in this regard. Your framework seems appropriate for true type conversions only.
The same advantage here.

    string encripted = encript(str);

The above is better *if* and *after* you have spent time getting to know what "encript" actually does, its shortcomings, etc. On a large scale all that additional knowledge accumulates and becomes expensive. The problem (as I see it) is that s/w developers often seem to focus on the small picture and lose the big one. That's not an insult but an observation, and it is the result of daily attention to detail. When the focus shifts to having to deal with lots and lots of (often unfamiliar) code -- say, in code reviews or maintenance -- then the ability to reliably understand the code, as in

    d1 = boost::convert<double>(x, xcnv).value();
    d2 = boost::convert<double>(y, ycnv).value();
    d3 = boost::convert<double>(z, zcnv).value();

rather than *guess* what it might be doing, as in

    d1 = x.to_double();
    d2 = y.extract_double(param);
    d3 = z.simplify();

is important. All IMO of course.
... I successfully compiled the test program with Apple Clang 5.1 (based on 3.4). It ran successfully except for one error:
test_convert.cpp(313): test 'double_eng == eng_expected' failed in function 'int main(int, const char **)'

I believe that's because different compilers and platforms have different locale representations. Say, Microsoft's "1.235e-002" is "1.235e-02" with gcc. I wonder what "double_eng" was in your test (printed on line 310)?

I'll happily run an adjusted test case for you if you provide the code.
Right before the test failed, it must have printed with

    310: printf("eng locale=%s, presentation=%s.\n", eng_locale.name().c_str(), double_eng.c_str());

To incorporate Clang I need to know what macro it defines (like __GNUC__ for gcc) and what was printed for "double_eng". Thanks, V.
Vladimir Batov wrote:
Your framework is general enough to replace
x = std::sqrt(y);
with
boost::sqrt_convert sqrtcnv; x = boost::convert<double>::from(y, sqrtcnv).value();
but nobody would find that a good idea.
[...]
I think encryption is more like std::sqrt than like std::stoi in this regard. Your framework seems appropriate for true type conversions only.
The same advantage here.
string encripted = encript(str);
The above is better *if* and *after* you have spent time getting to know what "encript" actually does, its shortcomings, etc. On a large scale all that additional knowledge accumulates and becomes expensive.
I think I understand what you mean, but I fail to see why `encript` is a good example and `sqrt` isn't. Both convert one value into another, both have details that you need to know about and both can fail, right?
The problem (as I see it) is that s/w developers often seem to focus on the small picture and lose the big one. That's not an insult but an observation, and it is the result of daily attention to detail. When the focus shifts to having to deal with lots and lots of (often unfamiliar) code -- say, in code reviews or maintenance -- then the ability to reliably understand the code, as in
d1 = boost::convert<double>(x, xcnv).value();
d2 = boost::convert<double>(y, ycnv).value();
d3 = boost::convert<double>(z, zcnv).value();
rather than *guess* what it might be doing as in
d1 = x.to_double();
d2 = y.extract_double(param);
d3 = z.simplify();
is important. All IMO of course.
This looks like a convincing case to me (though I have to suppress xyz coordinate associations), but this appears to be a type conversion again.
Right before the test failed, it must have printed with
310: printf("eng locale=%s, presentation=%s.\n", eng_locale.name().c_str(), double_eng.c_str());
Oh right, now I see it.

    rus locale=ru_RU.UTF-8, presentation=1,235e-02.
    eng locale=, presentation=1,235e-02.

That "English" locale with the decimal comma appears to actually be the default locale, which is Dutch in my case.

I can also give you the performance results on my machine, which has a mobile 2.5 GHz dual-core i5 processor:

    str-to-int: strtol/scanf/lcast/sstream=0.73/2.01/1.32/4.58 seconds.
    int-to-str: ltostr/prntf/lcast/sstream=3.19/2.67/2.91/5.49 seconds.
    str-to-user-type: lcast/sstream=3.88/4.10 seconds.

(I compiled it without any compiler flags except for the include paths.)

Cheers, Julian
On 05/20/2014 04:59 PM, Julian Gonggrijp wrote:
Vladimir Batov wrote:
Your framework is general enough to replace
x = std::sqrt(y);
with
boost::sqrt_convert sqrtcnv; x = boost::convert<double>::from(y, sqrtcnv).value();
but nobody would find that a good idea. [...]
I think encryption is more like std::sqrt than like std::stoi in this regard. Your framework seems appropriate for true type conversions only. The same advantage here.
string encripted = encript(str);
The above is better *if* and *after* you have spent time getting to know what "encript" actually does, its shortcomings, etc. On a large scale all that additional knowledge accumulates and becomes expensive.

I think I understand what you mean, but I fail to see why `encript` is a good example and `sqrt` isn't. Both convert one value into another, both have details that you need to know about and both can fail, right?
You are correct: technically/mechanically *any* function taking one argument and returning one value could potentially be called a "converter" and invoked via boost::convert(). The good thing IMO is that "convert" allows that *if* there is a need for it... and I *can* think of situations where I might deploy std::sqrt as such a converter. Having said that, ultimately it's all about readability. So, given that std::sqrt is a standard component, I expect a s/w developer to be already familiar with it and, therefore, in most cases I expect it to be called directly. If "encrypt" is as well known on my project (say, it's part of our foundation library collection) then I'll likely treat it like std::sqrt. If it is some obscure function call, I might prefer it called via an established interface.
Right before the test failed, it must have printed with
310: printf("eng locale=%s, presentation=%s.\n", eng_locale.name().c_str(), double_eng.c_str());

Oh right, now I see it.
rus locale=ru_RU.UTF-8, presentation=1,235e-02.
eng locale=, presentation=1,235e-02.
That English locale with the decimal comma appears rather like the default locale, which is Dutch in my case.
I can also give you the performance results on my machine, which has a mobile 2.5 GHz dual core i5 processor:
str-to-int: strtol/scanf/lcast/sstream=0.73/2.01/1.32/4.58 seconds.
int-to-str: ltostr/prntf/lcast/sstream=3.19/2.67/2.91/5.49 seconds.
str-to-user-type: lcast/sstream=3.88/4.10 seconds.
(I compiled it without any compiler flags except for the include paths.)
Thank you, I corrected the test (in the post_review branch) to explicitly specify the English locale to avoid the error you mentioned. Thank you also for the performance results. I have a suspicion you did not compile with optimization and, if so, the results are somewhat misleading/skewed. For gcc it'd be -O3.
On May 19, 2014 9:36:40 PM EDT, Vladimir Batov
On 05/20/2014 11:02 AM, Julian Gonggrijp wrote:
Vladimir Batov wrote:
I probably wasn't clear about what I meant by "conversions" (quoted): I meant functions in general. Your framework is general enough to replace
x = std::sqrt(y);
with
boost::sqrt_convert sqrtcnv; x = boost::convert<double>::from(y, sqrtcnv).value();
but nobody would find that a good idea.
Any idea/proposal taken too far becomes absurd. So, I am not suggesting
we replace std::sqrt. On the other hand, I personally like
d1 = boost::convert<double>(x, xcnv).value();
d2 = boost::convert<double>(y, ycnv).value();
d3 = boost::convert<double>(z, zcnv).value();
much better than some obscure
d1 = x.to_double();
d2 = y.extract_double(param);
d3 = z.simplify();
as boost::convert advertises known/familiar/instantly recognizable behavior, whereas the seemingly innocuous "to_double()" and the others are dark horses that I need to spend time getting to know. When one multiplies that effort manifold, the advantages of using familiar interfaces become clear.
Your version is better only in the sense that it provides a common interface, if I understand your point correctly. However, the resulting behavior is not clearly implied by those calls, because one must know what the converters actually do in each case. (This is exemplified by the test error you resolved by explicitly specifying the locale.)

Now, switch to a free function and compare it to your proposal and we can make some progress. For the case with to_double(), it might look like this:

    d1 = to_double(x);

That interface can be as standard as yours, and is perfectly understandable. They are different ways of spelling a conversion operation. That mine is more succinct makes it superior in many ways. As it is, it lacks two things yours offers, however: extensibility and configurability. Extensibility can be added by making it a function template, like so:

    d1 = to<double>(x);

Configurability can be added with an overload:

    d1 = to<double>(x, cnv);

Did I miss something? At a glance, this offers everything your more verbose syntax offers.
I think encryption is more like std::sqrt than like std::stoi in this regard. Your framework seems appropriate for true type conversions only.
+1
The same advantage here.
string encripted = encript(str);
The above is better *if* and *after* you spent time getting to know what "encript" actually does, its shortcomings, etc. On large scale all that additional knowledge accumulates and expensive.
The same can be said about the encrypting converter one would use with your framework to effect encryption. What you're suggesting is that your spelling somehow clarifies that an encrypting conversion is occurring, and how it happens, and I don't buy it.
The problem (as I see it) is that s/w developers often seem to focus on the small picture and lose the big one. It's not an insult but an observation and is the result of daily attention to details. When focus shifts to having to deal with lots and lots of (often unfamiliar) code, say, code reviews, maintenance, then ability to reliably understand the code as in
d1 = boost::convert<double>(x, xcnv).value();
d2 = boost::convert<double>(y, ycnv).value();
d3 = boost::convert<double>(z, zcnv).value();
rather than *guess* what it might be doing as in
d1 = x.to_double();
d2 = y.extract_double(param);
d3 = z.simplify();
is important. All IMO of course.
You merely shifted the knowledge burden from three member functions to three converter classes. I'll grant that once you understand the converters, you'll understand their application in multiple contexts, so that advantage does partially justify your claim.

___
Rob
(Sent from my portable computation engine)
On 05/25/2014 07:13 PM, Rob Stewart wrote:
On May 19, 2014 9:36:40 PM EDT, Vladimir Batov wrote:

... I personally like
d1 = boost::convert<double>(x, xcnv).value();
much better than some obscure
d1 = x.to_double();
d2 = y.extract_double(param);
d3 = z.simplify();
as boost::convert advertises known/familiar/instantly recognizable behavior, whereas the seemingly innocuous "to_double()" and the others are dark horses that I need to spend time getting to know. When one multiplies that effort manifold, the advantages of using familiar interfaces become clear.

Your version is better only in the sense that it provides a common interface, if I understand your point correctly. However, the resulting behavior is not clearly implied by those calls, because one must know what the converters actually do in each case. (This is exemplified by the test error you resolved by explicitly specifying the locale.)
Now, switch to a free function and compare it to your proposal and we can make some progress. For the case with to_double(), it might look like this:
d1 = to_double(x);
That interface can be as standard as yours, and is perfectly understandable. They are different ways of spelling a conversion operation. That mine is more succinct makes it superior in many ways. As it is, it lacks two things yours offers, however: extensibility and configurability.
Extensibility can be added by making it a function template like so:
d1 = to<double>(x);
Configurability can be added with an overload:
d1 = to<double>(x, cnv);
Did I miss something? At a glance, this offers everything your more verbose syntax offers.
Rob, the only thing missing in your version is the handling of the conversion-failure case in a non-throwing manner. If we add this to your example, then it becomes

    optional<double> d1 = to<double>(x, cnv);

It seems identical to what we've come up with so far for the "convert" user API:

    optional<double> d1 = boost::convert<double>(x, cnv);

If you and others feel that "to" reflects the purpose better than "convert", I am fine with it... I think Jeroen used "as" in his "coerce":

    optional<double> d1 = boost::convert<double>(x, cnv);
    optional<double> d1 = boost::to<double>(x, cnv);
    optional<double> d1 = boost::as<double>(x, cnv);

...??? I'll adopt whatever the community feels reflects the purpose/intention better.
The same advantage here.
string encripted = encript(str);
The above is better *if* and *after* you spent time getting to know what "encript" actually does, its shortcomings, etc. On large scale all that additional knowledge accumulates and expensive. The same can be said about the encrypting converter one would use with your framework to effect encryption. What you're suggesting is that your spelling somehow clarifies that an encrypting conversion is occurring, and how it happens, and I don't buy it.
I am certainly not suggesting that my spelling is in any way superior. What I was trying to say is that a well-known familiar interface sends a clearer message compared to home-made obscure one. I feel that, when I see lexical_cast<string>(x); x.str(); #1 tells me more than #2.
The problem (as I see it) is that s/w developers often seem to focus on the small picture and lose the big one. It's not an insult but an observation and is the result of daily attention to details. When focus shifts to having to deal with lots and lots of (often unfamiliar) code, say, code reviews, maintenance, then ability to reliably understand the code as in
d1 = boost::convert<double>(x, xcnv).value(); d2 = boost::convert<double>(y, ycnv).value(); d2 = boost::convert<double>(z, zcnv).value();
rather than *guess* what it might be doing as in
d1 = x.to_double(); d2 = y.extract_double(param); d2 = z.simplify();
is important. All IMO of course. You merely shifted the knowledge burden to three converter classes from three member functions. I'll grant that once you understand those the converters, you'll understand their application in multiple contexts, so that advantage does partially justify your claim.
I do see your point. I truly do. Still, I do not believe the issue is black or white, i.e. for you for be correct I have to be wrong. Please let me try again and explain my point. For starters, let's distance ourselves from "convert" and focus on the usability issue itself. Let's take a well-known instantly recognizable API -- lexical_cast -- and compare #1 string s1 = lexical_cast<string>(x); string s2 = lexical_cast<string>(y); string s3 = lexical_cast<string>(z); with, say, #2 string s1 = x.str(); string s2 = y.stringinize(); string s3 = z.to_string(); Here, by paraphrasing your statement lexical_cast "merely shifts the knowledge burden" to std::istream& operator>>(std::istream&, TypeX&); std::ostream& operator<<(std::ostream&, TypeX const&); std::istream& operator>>(std::istream&, TypeY&); std::ostream& operator<<(std::ostream&, TypeY const&); std::istream& operator>>(std::istream&, TypeZ&); std::ostream& operator<<(std::ostream&, TypeZ const&); and I myself wrote quite a number of these and I hate writing them... Still, I was doing that because that one-time pain was countered IMO by many-times usage/deployment gain. I feel that uniform lexical_cast-based deployment sends a far-clearer message about intentions and purpose... not to mention advertises well-known user-facing interface and behavior. For me, reading #1 flows (even though as you pointed out it does not convey all information). I feel stumbling on every line while reading #2 because mentally I have to actually read it (usually people do not read but recognize patterns, i.e. I do not read l-e-x-i-c-a-l but gulp it as one-unit pattern... if that's a familiar one... that's, for example, why it's quite hard spotting typos); then I have to stop and mentally decipher what "str" might actually mean; then I have to apply my interpretation to the context to see if it matches. Hmm, just re-read the paragraph above... looks awfully boring. 
:-) Still, it is probably the best I can do within the confinement of a post. Apologies if I failed again. :-) Another probably easier-to-grasp advantage of #1 over #2 is the orthogonality of provided conversion functionality and flexibility/adaptability of optional<double> d1 = boost::convert<double>(x, cnv); optional<double> d1 = boost::to<double>(x, cnv); compared to #2.
On May 25, 2014 7:01:21 PM EDT, Vladimir Batov wrote:
On May 19, 2014 9:36:40 PM EDT, Vladimir Batov wrote:
... I personally like
d1 = boost::convert<double>(x, xcnv).value();
much better than some obscure
d1 = x.to_double();
d2 = y.extract_double(param);
d3 = z.simplify();
as boost::convert advertises known/familiar/instantly recognizable behavior, whereas the seemingly innocuous "to_double()" and the others are dark horses that I need to spend time getting to know. When one multiplies that effort manifold, the advantages of using familiar interfaces become clear.
Your version is better only in the sense that it provides a common interface, if I understand your point correctly. However, the resulting behavior is not clearly implied by those calls, because one must know what the converters actually do in each case. (This is exemplified by the compilation error you resolved, caused by not explicitly specifying the locale.)
On 05/25/2014 07:13 PM, Rob Stewart wrote:
Now, switch to a free function and compare it to your proposal and we can make some progress. For the case with to_double(), it might look like this:
d1 = to_double(x);
That interface can be as standard as yours, and it is perfectly understandable. They are different ways of spelling a conversion operation. The fact that mine is more succinct makes it superior in many ways. As it is, however, it lacks two things yours offers: extensibility and configurability.
Extensibility can be added by making it a function template like so:
d1 = to<double>(x);
Configurability can be added with an overload:
d1 = to<double>(x, cnv);
Did I miss something? At a glance, this offers everything your more verbose syntax offers.
Rob, the only thing missing in your version is the handling of the conversion-failure case in a non-throwing manner.
My examples followed yours.
If we add this to your example, then it becomes
optional<double> d1 = to<double>(x, cnv);
It seems identical to what we've come up so far for the "convert" user API:
optional<double> d1 = boost::convert<double>(x, cnv);
Without the directional cue that "from" provided, "convert" loses clarity. "to" provides directionality.
If you and others feel that "to" reflects the purpose better than "convert", I am fine with it... I think Jeroen used "as" in his "coerce":
"as" works equally well for me.
optional<double> d1 = boost::convert<double>(x, cnv);
optional<double> d1 = boost::to<double>(x, cnv);
optional<double> d1 = boost::as<double>(x, cnv);
...???
I'll adapt what the community feels reflects the purpose/intention better.
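[Editor's note: the interface shape being converged on above can be sketched in a few lines. All names here (`to`, `cstream_cnv`) are invented for illustration; this is not Boost.Convert's actual API, just a minimal sketch of a function template that is extensible via the template parameter, configurable via the converter argument, and non-throwing via the optional return.]

```cpp
#include <optional>
#include <sstream>
#include <string>

// Hypothetical stream-based converter (illustrative only; not
// Boost.Convert's actual converter interface).
struct cstream_cnv {
    template <typename T>
    void operator()(const std::string& s, std::optional<T>& out) const {
        std::istringstream in(s);
        T v;
        char extra;
        if (in >> v && !in.get(extra))  // require the whole string to parse
            out = v;                    // success: assign; failure: leave empty
    }
};

// The "to<T>(x, cnv)" spelling under discussion: the target type is a
// template parameter, the conversion strategy is a plain argument, and
// failure is reported through the empty optional rather than a throw.
template <typename T, typename Cnv>
std::optional<T> to(const std::string& x, const Cnv& cnv) {
    std::optional<T> result;
    cnv(x, result);
    return result;
}
```

A call site then reads `std::optional<double> d = to<double>("2.5", cstream_cnv{});` and a failed parse yields an empty optional instead of an exception.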
The same advantage here.
string encripted = encript(str);
The above is better *if* and *after* you have spent time getting to know what "encript" actually does, its shortcomings, etc. On a large scale, all that additional knowledge accumulates and becomes expensive. The same can be said about the encrypting converter one would use with your framework to effect encryption. What you're suggesting is that your spelling somehow clarifies that an encrypting conversion is occurring, and how it happens, and I don't buy it.
I am certainly not suggesting that my spelling is in any way superior. What I was trying to say is that a well-known, familiar interface sends a clearer message than a home-made, obscure one. I feel that, when I see
#1 lexical_cast<string>(x);
#2 x.str();
#1 tells me more than #2.
Perhaps. It tells you more about the mechanism, I'll grant.
The problem (as I see it) is that s/w developers often seem to focus on the small picture and lose the big one. It's not an insult but an observation and is the result of daily attention to details. When focus shifts to having to deal with lots and lots of (often unfamiliar) code, say, code reviews, maintenance, then ability to reliably understand the code as in
d1 = boost::convert<double>(x, xcnv).value();
d2 = boost::convert<double>(y, ycnv).value();
d3 = boost::convert<double>(z, zcnv).value();
rather than *guess* what it might be doing, as in
d1 = x.to_double();
d2 = y.extract_double(param);
d3 = z.simplify();
is important. All IMO, of course.
You merely shifted the knowledge burden to three converter classes from three member functions. I'll grant that once you understand those converters, you'll understand their application in multiple contexts, so that advantage does partially justify your claim.
I do see your point. I truly do.
Not quite, I think.
Still, I do not believe the issue is black or white, i.e., that for you to be correct I have to be wrong. Please let me try again to explain my point. For starters, let's distance ourselves from "convert" and focus on the usability issue itself. Let's
take a well-known instantly recognizable API -- lexical_cast -- and compare
#1
string s1 = lexical_cast<string>(x);
string s2 = lexical_cast<string>(y);
string s3 = lexical_cast<string>(z);
with, say,
#2
string s1 = x.str();
string s2 = y.stringinize();
string s3 = z.to_string();
Here, by paraphrasing your statement, lexical_cast "merely shifts the knowledge burden" to
std::istream& operator>>(std::istream&, TypeX&);
std::ostream& operator<<(std::ostream&, TypeX const&);
std::istream& operator>>(std::istream&, TypeY&);
std::ostream& operator<<(std::ostream&, TypeY const&);
std::istream& operator>>(std::istream&, TypeZ&);
std::ostream& operator<<(std::ostream&, TypeZ const&);
and I myself wrote quite a number of these, and I hate writing them... Still, I kept doing it because that one-time pain was countered, IMO, by the many-times usage/deployment gain. I feel that uniform lexical_cast-based deployment sends a far clearer message about intentions and purpose... not to mention that it advertises a well-known user-facing interface and behavior. For me, reading #1 flows (even though, as you pointed out, it does not convey all the information). I find myself stumbling on every line while reading #2, because mentally I have to actually read it (usually people do not read but recognize patterns, i.e. I do not read l-e-x-i-c-a-l but gulp it as a one-unit pattern... if it is a familiar one... that is, for example, why it is quite hard to spot typos); then I have to stop and mentally decipher what "str" might actually mean; then I have to apply my interpretation to the context to see if it matches.
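[Editor's note: the "one-time pain" of the operator pairs above can be sketched for a hypothetical type. Everything here is invented for illustration (`point` is an example type, and `lexical` merely emulates the lexical_cast mechanism via a stringstream; it is not boost::lexical_cast itself). Once the two operators exist, any uniform iostream-based spelling works for the type.]

```cpp
#include <iostream>
#include <sstream>
#include <string>

struct point { int x, y; };  // invented example type

// The per-type "one-time pain": one insertion/extraction pair.
std::ostream& operator<<(std::ostream& os, const point& p) {
    return os << p.x << ',' << p.y;
}
std::istream& operator>>(std::istream& is, point& p) {
    char comma = 0;
    if ((is >> p.x >> comma >> p.y) && comma != ',')
        is.setstate(std::ios::failbit);  // reject malformed input
    return is;
}

// Minimal lexical_cast-style helper built on those operators: the
// "many-times gain" is that this one spelling now covers the type
// in both directions.
template <typename To, typename From>
To lexical(const From& from) {
    std::stringstream ss;
    ss << from;   // serialize via operator<<
    To to{};
    ss >> to;     // deserialize via operator>>
    return to;
}
```

With the operators in place, `lexical<point>(std::string("3,4"))` and `lexical<std::string>(point{3, 4})` both just work.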
I value consistency of interface. The difference between what you've proposed and what lexical_cast offers is that yours defers to an arbitrary converter, while lexical_cast always uses IOStreams, though that in turn defers to type-specific insertion and extraction operators. The operators, however, are type-specific, while your converters are orthogonal to the types (unless using an IOStreams-based converter).
Another probably easier-to-grasp advantage of #1 over #2 is the orthogonality of provided conversion functionality and flexibility/adaptability of
optional<double> d1 = boost::convert<double>(x, cnv);
optional<double> d1 = boost::to<double>(x, cnv);
compared to #2.
Using a standard interface over whimsical and varying type-specific interfaces is helpful. I'm just saying that one must understand the behavior of a potentially large number of converters. The standard interface doesn't help one understand how the conversion is done.
(I'll submit a review tomorrow.)
___
Rob
(Sent from my portable computation engine)
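[Editor's note: Rob's caveat, that the uniform spelling does not reveal how a conversion is performed, can be demonstrated with two converters sharing one interface. All names below are invented for illustration; this is a sketch, not the library's API.]

```cpp
#include <cstdlib>
#include <optional>
#include <string>

// Two converters with identical interfaces but different behavior.
struct dec_cnv {
    std::optional<int> operator()(const std::string& s) const {
        char* end = nullptr;
        long v = std::strtol(s.c_str(), &end, 10);  // parse as decimal
        if (end != s.c_str() && *end == '\0') return static_cast<int>(v);
        return std::nullopt;
    }
};
struct hex_cnv {
    std::optional<int> operator()(const std::string& s) const {
        char* end = nullptr;
        long v = std::strtol(s.c_str(), &end, 16);  // parse as hexadecimal
        if (end != s.c_str() && *end == '\0') return static_cast<int>(v);
        return std::nullopt;
    }
};

// Identical call sites, different results: the spelling alone does
// not reveal which conversion is performed.
template <typename T, typename Cnv>
std::optional<T> to(const std::string& s, const Cnv& cnv) { return cnv(s); }
```

Here `to<int>("10", dec_cnv{})` and `to<int>("10", hex_cnv{})` read the same at the call site yet yield 10 and 16 respectively, which is precisely the point: the uniform interface buys consistency, not transparency of mechanism.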
On 5/25/2014 10:04 PM, Rob Stewart wrote:
(I'll submit a review tomorrow.)
Tomorrow is already beyond the final date for the review. However, I can wait one more day for your review, since you have said that you will be submitting one.
participants (4)
- Edward Diener
- Julian Gonggrijp
- Rob Stewart
- Vladimir Batov