Math tools polynomial enhancements
I was just looking at the polynomial class and thinking that it could be enhanced substantially with more operators. Presumably with division and remainder it would be a Euclidean ring and then you could even call gcd on it. I have a draft implementation ready but I wanted to check that there is interest from the maintainers. Cheers. Jeremy
On 27/10/2015 21:49, Jeremy Murphy wrote:
I was just looking at the polynomial class and thinking that it could be enhanced substantially with more operators. Presumably with division and remainder it would be a Euclidean ring and then you could even call gcd on it.
This is something of an open question. I think there is a place for polynomial manipulation within Boost, but I'm not sure that this class is the best basis. As we say in the docs, it's a braindead implementation that's good enough for what I needed at the time to implement Boost.Math, but not really suitable for heavy duty polynomial manipulation.
Division is interesting because it's not actually clear to me what the result should be - is it a polynomial (plus remainder) or is it a rational function (suitably reduced by the greatest common divisor)?
I think it probably needs to be written by someone who has a concrete use case and is deeply familiar with the theory, I don't know if that's you, but I do know it's not me ;) Best, John.
I have a draft implementation ready but I wanted to check that there is interest from the maintainers. Cheers. Jeremy
Hi John,
On 29 October 2015 at 21:22, John Maddock wrote:
On 27/10/2015 21:49, Jeremy Murphy wrote:
I was just looking at the polynomial class and thinking that it could be enhanced substantially with more operators. Presumably with division and remainder it would be a Euclidean ring and then you could even call gcd on it.
This is something of an open question.
I think there is a place for polynomial manipulation within Boost, but I'm not sure that this class is the best basis. As we say in the docs, it's a braindead implementation that's good enough for what I needed at the time to implement Boost.Math, but not really suitable for heavy duty polynomial manipulation.
Well, I think "braindead" is a little harsh, it seems quite reasonable to me for a general-purpose univariate polynomial class. I must admit though, I don't particularly like the choice of constructors and the degree member function will return the maximum value of size_t when size == 0. What do you think of the idea then of making this polynomial class a proof-of-concept in terms of functionality? In the future the class could be given a heavy duty redesign, without, I hope, having to reimplement too much. But even if the redesign was never done, we would still have this class, with added functionality. Division is interesting because it's not actually clear to me what the
result should be - is it a polynomial (plus remainder) or is it a rational function (suitable reduced by the greatest common divisor).
Yes, I was initially troubled by this question but resolved, admittedly more through intuition than proof, that polynomial division is Euclidean (integer) division: the / operator gives you the quotient, and % gives you the remainder. Someone with a deeper understanding of abstract algebra could presumably validate or discredit this claim. However, if one accepts this, then everything falls neatly into place, for example the /= operator makes sense, which it obviously wouldn't otherwise.
I think it probably needs to be written by someone who has a concrete use case and is deeply familiar with the theory, I don't know if that's you, but I do know it's not me ;)
I admit that I am mostly drawn to this problem by fascination and wonder rather than a pragmatic need to get something done. But I think you're right, it's important to have a concrete use case rather than just throwing operators at a class to see what sticks. So GCD is the use case I propose, starting with the Euclidean algorithm and then the Stein algorithm. I'm not an expert in this area but I have a pretty good idea of what needs to be done. Cheers. Jeremy
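For illustration, the Euclidean algorithm mentioned above is only a few lines once division with remainder is available. A minimal sketch, assuming a polynomial type that provides Euclidean remainder via % and a hypothetical is_zero() query; none of these names are the actual Boost.Math interface:

// Sketch only: Poly is any polynomial type with operator% and is_zero().
template <class Poly>
Poly gcd(Poly a, Poly b)
{
    // gcd(a, b) == gcd(b, a mod b); the degree of b strictly decreases,
    // so the loop terminates.
    while (!b.is_zero())
    {
        Poly r = a % b;
        a = b;
        b = r;
    }
    return a;   // usually normalised afterwards, e.g. made monic
}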
On Fri, 30 Oct 2015, Jeremy Murphy wrote:
Division is interesting because it's not actually clear to me what the result should be - is it a polynomial (plus remainder) or is it a rational function (suitably reduced by the greatest common divisor)?
Yes, I was initially troubled by this question but resolved, admittedly more through intuition than proof, that polynomial division is Euclidean (integer) division: the / operator gives you the quotient, and % gives you the remainder. Someone with a deeper understanding of abstract algebra could presumably validate or discredit this claim. However, if one accepts this, then everything falls neatly into place, for example the /= operator makes sense, which it obviously wouldn't otherwise.
This looks like a sensible choice. The situation is pretty similar to integers. 10 / 4 could return a rational type, but the choice was made to stay in the original type and use the Euclidean domain structure instead. You might want to provide a div-like function for people who want both the quotient and the remainder without duplicating too much computation. -- Marc Glisse
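To make the above concrete, classic polynomial long division produces the quotient and the remainder in one pass, so a single div-style routine can back operator/, operator% and the combined call. A rough sketch over a plain coefficient vector stored lowest power first; this is illustrative only, not the draft implementation being discussed:

#include <cstddef>
#include <utility>
#include <vector>

// Euclidean division of dense polynomials (coefficients lowest power first).
// Returns {quotient, remainder}. Requires a non-zero leading coefficient in
// the divisor and a field-like coefficient type T.
template <class T>
std::pair<std::vector<T>, std::vector<T>>
divide(std::vector<T> dividend, const std::vector<T>& divisor)
{
    std::size_t qsize = dividend.size() >= divisor.size()
                            ? dividend.size() - divisor.size() + 1 : 1;
    std::vector<T> quotient(qsize, T(0));
    while (dividend.size() >= divisor.size())
    {
        std::size_t shift = dividend.size() - divisor.size();
        T factor = dividend.back() / divisor.back();
        quotient[shift] = factor;
        // subtract factor * x^shift * divisor from the running dividend
        for (std::size_t i = 0; i < divisor.size(); ++i)
            dividend[i + shift] -= factor * divisor[i];
        dividend.pop_back();   // the leading term is now zero by construction
    }
    return std::make_pair(quotient, dividend);   // what is left is the remainder
}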
On 30/10/2015 2:24 AM, "Marc Glisse"
On Fri, 30 Oct 2015, Jeremy Murphy wrote:
Division is interesting because it's not actually clear to me what the result should be - is it a polynomial (plus remainder) or is it a rational function (suitably reduced by the greatest common divisor)?
Yes, I was initially troubled by this question but resolved, admittedly more through intuition than proof, that polynomial division is Euclidean (integer) division: the / operator gives you the quotient, and % gives you the remainder. Someone with a deeper understanding of abstract algebra could presumably validate or discredit this claim. However, if one accepts this, then everything falls neatly into place, for example the /= operator makes sense, which it obviously wouldn't otherwise.
This looks like a sensible choice. The situation is pretty similar to integers. 10 / 4 could return a rational type, but the choice was made to stay in the original type and use the Euclidean domain structure instead.
Thanks. Out of curiosity, which choice are you referring to? I presume it must be early in computing history.
You might want to provide a div-like function for people who want both the quotient and the remainder without duplicating too much computation.
Yes, that's exactly what I've done. Cheers.
Well, I think "braindead" is a little harsh, it seems quite reasonable to me for a general-purpose univariate polynomial class. I must admit though, I don't particularly like the choice of constructors In C++11 land, constexpr construction from an initializer list would be a good choice.
and the degree member function will return the maximum value of size_t when size == 0.
That's a bug. See, told you it was braindead ;)
Yes, I was initially troubled by this question but resolved, admittedly more through intuition than proof, that polynomial division is Euclidean (integer) division: the / operator gives you the quotient, and % gives you the remainder. Someone with a deeper understanding of abstract algebra could presumably validate or discredit this claim. However, if one accepts this, then everything falls neatly into place, for example the /= operator makes sense, which it obviously wouldn't otherwise.
Looking quickly at what other polynomial libraries do, it seems this could indeed work.
I think it probably needs to be written by someone who has a concrete use case and is deeply familiar with the theory, I don't know if that's you, but I do know it's not me ;)
I admit that I am mostly drawn to this problem by fascination and wonder rather than a pragmatic need to get something done. But I think you're right, it's important to have a concrete use case rather than just throwing operators at a class to see what sticks. So GCD is the use case I propose, starting with the Euclidean algorithm and then the Stein algorithm. I'm not an expert in this area but I have a pretty good idea of what needs to be done.
OK, let's give it a shot and see where it leads.
Best, John.
On 30 October 2015 at 05:33, John Maddock wrote:
OK, let's give it a shot and see where it leads.
At the moment it has led me to the question of how to disable division when the coefficients are not a field and, ideally, how to provide a useful compiler error message along the way. I don't expect to actually be able to determine which types are fields (that kind of concept-checking is still a dream) so I'm thinking that I'll just disable it for integral types and rely on documentation for the rest. Cheers. Jeremy
On 01/11/2015 00:57, Jeremy Murphy wrote:
On 30 October 2015 at 05:33, John Maddock wrote:
OK, let's give it a shot and see where it leads.
At the moment it has led me to the question of how to disable division when the coefficients are not a field and, ideally, how to provide a useful compiler error message along the way.
I don't expect to actually be able to determine which types are fields (that kind of concept-checking is still a dream) so I'm thinking that I'll just disable it for integral types and rely on documentation for the rest.
We have enough traits to get amazingly close to concepts, however, in this case it's the semantics of say operator / which determine "ring-ness", so yes by all means check for integer types (you may need both is_integral and numeric_limits<>::is_integer to catch them all). John.
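To make that concrete, a trait combining the two checks John mentions could gate the division operators; a sketch only, the trait name is made up:

#include <limits>
#include <type_traits>

// True for built-in integers and for user-defined integer types that
// specialise std::numeric_limits (e.g. multiprecision integers).
template <class T>
struct is_integer_like
    : std::integral_constant<bool,
          std::is_integral<T>::value || std::numeric_limits<T>::is_integer>
{};

static_assert(is_integer_like<long>::value, "built-in integers are caught");
static_assert(!is_integer_like<double>::value, "floating point passes through");

// operator/ and friends could then contain something like:
//   static_assert(!is_integer_like<T>::value,
//                 "polynomial division requires a field-like coefficient type");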
Btw, for anyone else interested, the PR for run-time polynomial division is open: https://github.com/boostorg/math/pull/17
I am a bit conflicted about the question of assert vs exception so wouldn't mind hearing more points of view.
On 1 November 2015 at 19:57, John Maddock wrote:
On 01/11/2015 00:57, Jeremy Murphy wrote:
On 30 October 2015 at 05:33, John Maddock wrote:
OK, let's give it a shot and see where it leads.
At the moment it has led me to the question of how to disable division when the coefficients are not a field and, ideally, how to provide a useful compiler error message along the way.
I don't expect to actually be able to determine which types are fields (that kind of concept-checking is still a dream) so I'm thinking that I'll just disable it for integral types and rely on documentation for the rest.
We have enough traits to get amazingly close to concepts, however, in this case it's the semantics of say operator / which determine "ring-ness", so yes by all means check for integer types (you may need both is_integral and numeric_limits<>::is_integer to catch them all).
John.
On 02/11/2015 12:52, Jeremy Murphy wrote:
Btw, for anyone else interested, the PR for run-time polynomial division is open: https://github.com/boostorg/math/pull/17
I am a bit conflicted about the question of assert vs exception so wouldn't mind hearing more points of view.
I think the issue is that this is a borderline case. Generally speaking I prefer exceptions plus long-and-tedious error messages, because it reduces the number of "why does your silly code crash" bug reports. Asserts I restrict to invariants in my code. However, any interface that's designed to be used in a tight inner loop should use asserts instead (plus a really big warning in the docs!). Something like vector subscripting is a good example. I'm not sure which this is.... although any usage inside a loop would likely be trivial compared to any mutating operations on the polynomial? John.
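For comparison, the two styles would look something like this at the entry of the division routine; illustrative only (is_zero() is a hypothetical query), not the code in the pull request:

#include <cassert>
#include <stdexcept>

// Style 1: treat a zero divisor as a precondition violation (debug-only check).
template <class Poly>
void require_nonzero_divisor_assert(const Poly& divisor)
{
    assert(!divisor.is_zero() && "polynomial division by zero");
}

// Style 2: always-on check with a descriptive, catchable error.
template <class Poly>
void require_nonzero_divisor_throw(const Poly& divisor)
{
    if (divisor.is_zero())
        throw std::domain_error("divide by zero polynomial");
}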
On 10/29/15 3:22 AM, John Maddock wrote:
I think there is a place for polynomial manipulation within Boost, but I'm not sure that this class is the best basis. As we say in the docs, it's a braindead implementation that's good enough for what I needed at the time to implement Boost.Math, but not really suitable for heavy duty polynomial manipulation.
While you're at it, how about a constexpr/TMP version which would do polynomial manipulation at compile time?
I think it probably needs to be written by someone who has a concrete use case and is deeply familiar with the theory, I don't know if that's you, but I do know it's not me ;)
LOL - if not you, then who? Robert Ramey
On 29/10/2015 15:04, Robert Ramey wrote:
On 10/29/15 3:22 AM, John Maddock wrote:
I think there is a place for polynomial manipulation within Boost, but I'm not sure that this class is the best basis. As we say in the docs, it's a braindead implementation that's good enough for what I needed at the time to implement Boost.Math, but not really suitable for heavy duty polynomial manipulation.
While you're at it, how about a constexpr/TMP version which would do polynomial manipulation at compile time?
Interesting idea, what kind of manipulation did you have in mind - arithmetic? Might be possible in C++14 but certainly not in C++11. Probably a whole different library actually, since the order would have to be a compile time parameter? John.
On 10/29/15 11:35 AM, John Maddock wrote:
On 29/10/2015 15:04, Robert Ramey wrote:
On 10/29/15 3:22 AM, John Maddock wrote:
I think there is a place for polynomial manipulation within Boost, but I'm not sure that this class is the best basis. As we say in the docs, it's a braindead implementation that's good enough for what I needed at the time to implement Boost.Math, but not really suitable for heavy duty polynomial manipulation.
While you're at it, how about a constexpr/TMP version which would do polynomial manipulation at compile time?
Interesting idea, what kind of manipulation did you have in mind - arithmetic? Might be possible in C++14 but certainly not in C++11. Probably a whole different library actually,
since the order would have to be a compile time parameter?
Right - I'm thinking it would be a variadic tuple.
here's what I had in mind:
// define a polynomial with coefficients of type T (int, float, etc)
template<typename T>
using polynomial = std::tuple
I'm thinking it would be a variadic tuple.
here's what I had in mind:
// define a polynomial with coefficients of type T (int, float, etc)
template<typename T> using polynomial = std::tuple
// now define some operators on these polynomials

// polynomial addition - easy to implement
template<typename T>
polynomial<T> operator+(polynomial<T> lhs, polynomial<T> rhs);

// other ops ... easy to implement

// polynomial division - aaaa - more complicated to implement
template<typename T>
polynomial<T> operator/(polynomial<T> lhs, polynomial<T> rhs);
These are all good ideas, however since polynomial above *is* a tuple, you have just added arithmetic operators for all tuples, which is not so good :(
I suspect the explosion of template instances for any non-trivial uses would also explode compile times?
Of course C++17 may (or not) fix both of the above.
A simpler alternative might be:
template
class polynomial;

Which implements a polynomial of maximum order N. Multiplication would have to be mod x^N as I can't think of an error handling mechanism that works for both runtime and constexpr contexts?
// very cool functions
template<typename T> constexpr polynomial<T> derivative(polynomial<T> p);
template<typename T> constexpr polynomial<T> integrate(polynomial<T> p);

// and of course we could evaluate any polynomial at a
// specific point at either compile and/or run time
constexpr T evaluate(const polynomial<T> & p);

// a taylor series has the last two terms reserved for a rational
// representation of an error term
template<typename T> using taylor_series = polynomial<T>;

// evaluate
T value(taylor_series<T>, T x);

// get error value
T error(taylor_series<T>, T x);

// given a sequence of derivatives of F at point x
// calculate a taylor series
template
constexpr taylor_series::taylor_series(x = 0);

// now I can replace my clunky quadrature of function F with

// return value and error
template
constexpr std::pair fast_integral(const T & start, const T & finish)
{
    using f = integrate(taylor_series<F>(3.233));
    T value = evaluate(f, finish) - evaluate(f, start);
    T error_value = abs(error(f, finish)) + abs(error(f, start));
    return std::pair
};

This would be pretty useful as it stands. Of course it brings to mind a few more ideas:

a) T above is presumed to be a scalar variable - but it doesn't have to be. What I really need is to permit T to be a tuple itself so I could handle complex and n-dimensional space situations.

b) The creation of F is problematic and tedious. For this we need TMP expression template driven automatic differentiation.
I'm thinking you could put this together in a couple of days
Robert Ramey
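A minimal C++14 sketch of the compile-time direction being discussed, using std::array instead of a tuple so that only the coefficient count is a template parameter; all names are illustrative, not a proposed interface:

#include <array>
#include <cstddef>
#include <utility>

template <class T, std::size_t N>            // N coefficients => degree N - 1
using poly = std::array<T, N>;

// Horner evaluation, usable at compile time or run time.
template <class T, std::size_t N>
constexpr T evaluate(const poly<T, N>& p, T x)
{
    T result = T(0);
    for (std::size_t i = N; i > 0; --i)
        result = result * x + p[i - 1];
    return result;
}

// Formal derivative built via an index sequence (degree drops by one).
template <class T, std::size_t N, std::size_t... I>
constexpr poly<T, sizeof...(I)>
derivative_impl(const poly<T, N>& p, std::index_sequence<I...>)
{
    return poly<T, sizeof...(I)>{{ (p[I + 1] * T(I + 1))... }};
}

template <class T, std::size_t N>
constexpr auto derivative(const poly<T, N>& p)   // assumes N >= 2
{
    return derivative_impl(p, std::make_index_sequence<N - 1>{});
}

// d/dx (1 + 2x + 3x^2) = 2 + 6x, which is 14 at x = 2 - checked at compile time.
static_assert(evaluate(derivative(poly<int, 3>{{1, 2, 3}}), 2) == 14, "");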
On 10/30/15 10:53 AM, John Maddock wrote:
I suspect the explosion of template instances for any non-trivial uses would also explode compile times?
I confess that I never worry about this. I see this concern raised all the time but I never notice it in my own work. One thing I do is avoid the "convenience" headers and include only the headers I actually know I need, so maybe that helps. Also, for the serialization library I make a library component - maybe that speeds up the build of all the applications. Also, CMake and B2 only recompile changed stuff, so that seems to help too. Anyway - for me this doesn't ever seem a real problem - especially these days.
Of course C++17 may (or not) fix both of the above.
A simpler alternative might be:
template
class polynomial;
Which implements a polynomial of maximum order N. Multiplication would have to be mod x^N as I can't think of an error handling mechanism that works for both runtime and constexpr contexts?
Well, I'm not seeing any of this. If I multiply a polynomial of order N and one of order M, I'm going to get a resultant polynomial of order M + N, am I not? Similarly with division I'll get one of M - N (ignoring the admittedly sticky question of the remainder). And what kind of error could multiplication of polynomials give? I must say I'm quite intrigued with this for C++14. I can't justify spending any time on it though. Robert Ramey
I suspect the explosion of template instances for any non-trivial uses would also explode compile times?
I confess that I never worry about this. I see this concern raised all the time but I never notice it in my own work. One thing I do is avoid the "convenience" headers and include only the headers I actually know I need, so maybe that helps. Also, for the serialization library I make a library component - maybe that speeds up the build of all the applications. Also, CMake and B2 only recompile changed stuff, so that seems to help too. Anyway - for me this doesn't ever seem a real problem - especially these days.
It depends on the use case. Boost.Multiprecision is a case in point: the expression templates are relatively slow to compile (but not as bad as they used to be), most of the compile time is in just including all the headers. However, it tentatively supports user-defined literals, so you can write: constexpr uint1024_t g = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbccccccccccccccccccccc_cppui1024; And so on. The problem is that if you have several of these literals, the compile times really explode - decoding the literal in C++11 means recursive template parsing which in turn means 100's of instantiations per literal. Maybe C++14 or 1y can help with this, I'm not sure, if I get the time I should investigate. Using tuples, where each order of polynomial is a new instantiation causes a similar issue. Might or might not be an issue depending on typical use cases.
Of course C++17 may (or not) fix both of the above.
A simpler alternative might be:
template
class polynomial;
Which implements a polynomial of maximum order N. Multiplication would have to be mod x^N as I can't think of an error handling mechanism that works for both runtime and constexpr contexts?
Well, I'm not seeing any of this. If I multiply a polynomial of order N and one of order M, I'm going to get a resultant polynomial of order M + N, am I not? Similarly with division I'll get one of M - N (ignoring the admittedly sticky question of the remainder).
And what kind of error could multiplication of polynomials give?
From C++11: "The class length_error defines the type of objects thrown as exceptions to report an attempt to produce an object whose length exceeds its maximum allowable size."
I must say I'm quite intrigued with this for C++14. I can't justify spending any time on it though.
Also intrigued, but also have other things to do.... would be easy for this to feature creep into a compile-time algebra library! John.
On 10/31/15 4:50 AM, John Maddock wrote:
Also intrigued, but also have other things to do.... would be easy for this to feature creep into a compile-time algebra library!
I think it would be hard to keep out. Also I think it would be very difficult to keep it from growing.

I'm actually more concerned about the demand for such a thing. I think it's very compelling. But as a member of the Program Committee for CPPcon 2015 I was very much struck by the lack of interest in topics related to mathematics and mathematical thinking. In particular there was a proposal for Algorithmic Differentiation implemented via TMP. In spite of strong advocacy on my part, other committee members were convinced that it was too advanced mathematically for the expected attendees. They might well have been right - if they were - it's even more disturbing to me.

One thing that we really, really, really need in Boost is better feedback on which libraries are actually used and how much they are used. I feel like we're spending a lot of time developing stuff that few if any programmers actually find useful. Sometimes they're right and the ideas just aren't that useful and other times they're wrong and they just don't get it.

This goes double for the C++ committee. How is it that years of effort and discussion and development can be invested in "Concepts" while getting Boost authors to include "concepts" in their documentation and boost concept checking in their code is like pulling teeth. I feel like I'm really missing something.

Robert Ramey
Hi Robert, I think you are asking the right questions.

On 2015-10-31 17:10, Robert Ramey wrote:

I think it would be hard to keep out. Also I think it would be very difficult to keep it from growing.

I think this is a general problem in boost: feature creep is all over the place. Considering the little manpower that is actually developing boost, adding new features is likely to have long term consequences, as it is much easier to add stuff to boost than to remove it. There is also very little oversight, once a library is accepted. This is a result of the boost model: people propose their libraries to boost, but boost does not have a list of "open problems". As there are no requirements on what a library's priorities are, there are only a few guidelines regarding the time after acceptance. This model also has an impact on the latter point you mentioned: while there are people actively developing new libraries for boost, there is not much checking whether it is relevant. People who do not understand the problem are likely to stay away from the review and thus bias the review process: "I have never heard about coroutines or fibers, apparently this is a thing. Someone else should review it before I write something stupid".
I'm actually more concerned about the demand for such a thing. I think it's very compelling. But as a member of the Program Committee for CPPcon 2015 I was very much struck by the lack of interest in topics related to mathematics and mathematical thinking. In particular there was a proposal for Algorithmic Differentiation implemented via TMP. In spite of strong advocacy on my part, other committee members were convinced that it was too advanced mathematically for the expected attendees. They might well have been right - if they were - it's even more disturbing to me.
I agree, this is disturbing especially considering how much scientific high performance code is written in C++. But I think this is a symptom of an underlying problem: there are no math standards in the C++ world, no standardized libraries or any agreement on interfaces. Therefore people are spread over many incompatible libraries and thus the impact of a single boost library is limited to the small fraction of scientific programmers that happen to be compatible with that interface.

For example we have uBLAS as one competitor for linear algebra, but currently Eigen and Armadillo are just better. As boost has impact, there are a few projects which try to be compatible with uBLAS (e.g. viennaCL and the thing I rolled myself) but still this does not give this library some magical impact because Eigen is so much better. I would say that uBLAS is therefore a "lost cause" as it has so many features that there is no way that the performance problems can be solved within the given development constraints.

Now, coming back to the TMP autodiff library: if there is no standardized way to represent vectors and the boost way is not favorable, there is also no standardized way to represent vector valued functions, let alone their derivatives. Without this, automatic differentiation is just not as useful, as working with single dimensional functions is almost trivial, especially as there are many tools which just spit out the right derivative and implementing that is only a few lines of code. On the other hand, as someone who wrote a few thousand lines of derivatives of vector-valued functions in machine learning, I would love to see a high performance TMP solution that I could just plug in.

Best, Oswin
I think it would be hard to keep out. Also I think it would be very difficult to keep it from growing.

I think this is a general problem in boost: feature creep is all over the place. Considering the little manpower that is actually developing boost, adding new features is likely to have long term consequences, as it is much easier to add stuff to boost than to remove it. There is also very little oversight, once a library is accepted.
Nod. Generally though, I think most libraries stay within their core mission. Interesting though, the topic that started this - the polynomial class in Boost.Math - is a definite case of feature creep: that class was definitely not reviewed when the library was accepted, it was (and currently still is) a semi-documented implementation detail. I'm reasonably easy about making small improvements, but ultimately, it could probably use a re-design, substantially better implementation, and perhaps a mini-review or something before moving out of the "internal details" section of the docs.
This is a result of the boost model: people propose their libraries to boost, but boost does not have a list of "open problems". As there are no requirements on what a library's priorities are, there are only a few guidelines regarding the time after acceptance. This model also has an impact on the latter point you mentioned: while there are people actively developing new libraries for boost, there is not much checking whether it is relevant. People who do not understand the problem are likely to stay away from the review and thus bias the review process: "I have never heard about coroutines or fibers, apparently this is a thing. Someone else should review it before I write something stupid".
It's always been the case that we've sometimes struggled to find enough domain-experts to adequately review a niche library. That said, we do *not* require reviewers to be capable of designing and/or implementing the library under review, just potential users, as in "my only use case for this is X, but this only does Y" etc.
Now, coming back to the TMP autodiff library: if there is no standardized way to represent vectors and the boost way is not favorable, there is also no standardized way to represent vector valued functions, let alone their derivatives.
Indeed. However, a vector of values is a concept, perhaps backed up by a traits class to access the elements, so the actual type could be fairly opaque, perhaps at the expense of code readability.
Without this, automatic differentiation is just not as useful, as working with single dimensional functions is almost trivial, especially as there are many tools which just spit out the right derivative and implementing that is only a few lines of code. On the other hand, as someone who wrote a few thousand lines of derivatives of vector-valued functions in machine learning, I would love to see a high performance TMP solution that I could just plug in.
One of the problems here, is that tools like Mathematica (and hence wolframalpha) are just so darn good, it would be nice if these tools could produce C++ code as output to save on the cut-and-paste, but really they're going to be very hard to compete with. I also worry somewhat about blindly using a black-box solution - if you use a template metaprogram to calculate the first N derivatives and evaluate them, how do you know that they're actually numerically stable? Sometimes casting a mark 1 eyeball over the formulae can save a lot of grief later (and sometimes not of course). OK, so there are intervals, but those have issues too. Of course this does nothing to detract from the ultimate coolness of the idea ;) Best, John.
On 11/1/15 1:46 AM, John Maddock wrote:
Of course this does nothing to detract from the ultimate coolness of the idea ;)
Well, maybe the coolness of an idea is inversely proportional to its impenetrability. After all, if it can't be implemented, there are no unhappy users. Robert Ramey
On 11/1/15 1:46 AM, John Maddock wrote:
One of the problems here, is that tools like Mathematica (and hence wolframalpha) are just so darn good,
it would be nice if these tools could produce C++ code as output to save on the cut-and-paste, but really they're going to be very hard to compete with.
Hmm - I'm not seeing this. For the questions being asked - take the symbolic derivative - there is only one answer. How can one tool be better than another?
I also worry somewhat about blindly using a black-box solution - if you use a template metaprogram to calculate the first N derivatives and evaluate them, how do you know that they're actually numerically stable?
We have that same problem with all the TMP stuff - and with normal user code! At least with library code we have all our eggs in one basket - and we can watch the basket!
Sometimes casting a mark 1 eyeball over the formulae can save a lot of grief later (and sometimes not of course).
OK, so there are intervals, but those have issues too.
Ahhh - more feature creep!
Hmm - I'm not seeing this. For the questions being asked - take the symbolic derivative - there is only one answer. How can one tool be better than another?
Hi, it is true that there is mathematically only one solution, however its formulation is not unique and some formulations might be better suited for numerical implementation. Take for example the sinc(x) = sin(x)/x function as some term in the computed derivative. A tool that returns the term "sin(x)/x" will cause trouble for x = 0; if it returns "sinc(x)" I can profit from the fact that someone implemented the function in a numerically stable way. Best, Oswin
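To make the sinc example concrete: the naive quotient is undefined at x == 0 (0/0), so a stable implementation typically switches to a short truncated series near zero. A small illustrative version:

#include <cmath>

// sinc(x) = sin(x)/x, with sinc(0) defined as 1.
// For |x| below the threshold the truncated Taylor series
// 1 - x^2/6 + x^4/120 is used; the first omitted term (x^6/5040)
// is far below double precision there.
double sinc(double x)
{
    if (std::fabs(x) < 1e-4)
    {
        double x2 = x * x;
        return 1.0 - x2 / 6.0 + x2 * x2 / 120.0;
    }
    return std::sin(x) / x;
}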
On 10/31/2015 12:10 PM, Robert Ramey wrote:
On 10/31/15 4:50 AM, John Maddock wrote:
Also intrigued, but also have other things to do.... would be easy for this to feature creep into a compile-time algebra library!
I think it would be hard to keep out. Also I think it would be very difficult to keep it from growing.
I'm actually more concerned about the demand for such a thing. I think it's very compelling. But as a member of the Program Committee for CPPcon 2015 I was very much struck by the lack of interest in topics related to mathematics and mathematical thinking. In particular there was a proposal for Algorithmic Differentiation implemented via TMP. In spite of strong advocacy on my part, other committee members were convinced that it was too advanced mathematically for the expected attendees. They might well have been right - if they were - it's even more disturbing to me.
One thing that we really, really, really need in Boost is better feedback on which libraries are actually used and how much they are used. I feel like we're spending a lot of time developing stuff that few if any programmers actually find useful. Sometimes they're right and the ideas just aren't that useful and other times they're wrong and they just don't get it.
This goes double for the C++ committee. How is it that years of effort and discussion and development can be invested in "Concepts" while getting Boost authors to include "concepts" in their documentation and boost concept checking in their code is like pulling teeth.
Perhaps because "concepts" are not part of C++ and unless concepts become codified Boost authors have nothing to work with in adding "concepts" to their library.
I feel like I'm really missing something.
On November 2, 2015 12:30:16 AM EST, Edward Diener wrote:
On 10/31/2015 12:10 PM, Robert Ramey wrote:
This goes double for the C++ committee. How is it that years of effort and discussion and development can be invested in "Concepts" while getting Boost authors to include "concepts" in their documentation and boost concept checking in their code is like pulling teeth.
Perhaps because "concepts" are not part of C++ and unless concepts become codified Boost authors have nothing to work with in adding "concepts" to their library.
Concepts are about documenting the operations required of a parameterizing type combined with semantics. BCCL goes a long way towards checking the former, while names and documentation can manage the latter. Concepts in the language will be nicer, of course. ___ Rob (Sent from my portable computation engine)
On 11/1/15 9:30 PM, Edward Diener wrote:
Perhaps because "concepts" are not part of C++ and unless concepts become codified Boost authors have nothing to work with in adding "concepts" to their library.
Sure they have something to work with. The Boost Concept Check Library has been available since 2002!!! And nothing would prevent library authors from including concepts in their documentation. This of course is my main point. If no one is interested in using what's currently available, how is making something else available going to make a difference? Not even the boost review process insists that library authors use concepts in their documentation - and this has been around since the original SGI documentation from 1995 - 20 years ago. CPP Reference uses concepts - but very little other documentation does. Robert Ramey
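As a reminder of what that looks like in practice, the Boost Concept Check Library lets a template assert its requirements directly, which also documents them. A tiny example using one of the stock concepts (the function itself is made up):

#include <boost/concept_check.hpp>

// The concept assertion both documents and enforces the requirement:
// instantiating my_max with a type lacking operator< fails with a
// message naming LessThanComparable rather than a cryptic error.
template <class T>
T my_max(T a, T b)
{
    BOOST_CONCEPT_ASSERT((boost::LessThanComparable<T>));
    return a < b ? b : a;
}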
On Mon, Nov 02, 2015 at 06:54:10AM -0800, Robert Ramey wrote:
On 11/1/15 9:30 PM, Edward Diener wrote:
Perhaps because "concepts" are not part of C++ and unless concepts become codified Boost authors have nothing to work with in adding "concepts" to their library.
Sure they have something to work with. The Boost Concept Check Library has been available since 2002!!!
Hi Robert, The boost graph library and property_maps library make extensive use of concepts. And most of these libraries were written between 1997 and 2001 in the C++98 dialect. Even though I strongly believe these important libraries are begging for an updated version 2 written in C++14, they are a work of art in my opinion. One can learn a lot by studying their implementations and working with them. It's especially fun and instructive to write your own algorithms. Karen
And nothing would prevent library authors from including concepts in their documentation.
This of course is my main point. If no one is interested in using what's currently available, how is making something else available going to make a difference?
Not even the boost review process insists that library authors use concepts in their documentation - and this has been around since the original SGI documentation from 1995 - 20 years ago. CPP Reference uses concepts - but very little other documentation does.
Robert Ramey
Not even the boost review process insists that library authors use concepts in their documentation - and this has been around since the original SGI documentation from 1995 - 20 years ago. CPP Reference uses concepts - but very little other documentation does.
True, but for template code doing so (and testing with concept archetypes etc) is nearly always very instructive IMO. I feel less need for overload resolution based on concepts, although every now and then I have a need for "this function accepts any container of integers" or some such. Even then, type_traits + enable_if get surprisingly close. John.
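As an illustration of that last point, here is roughly how type_traits plus enable_if expresses "any container of integers" today; a sketch, nothing more, with a made-up function name:

#include <numeric>
#include <type_traits>

// Participates in overload resolution only when the container's
// value_type is a built-in integer type.
template <class Container>
typename std::enable_if<
    std::is_integral<typename Container::value_type>::value,
    typename Container::value_type
>::type
sum_of(const Container& c)
{
    typedef typename Container::value_type value_type;
    return std::accumulate(c.begin(), c.end(), value_type(0));
}

// sum_of(std::vector<int>{1, 2, 3}) compiles and returns 6;
// sum_of(std::vector<double>{...}) is rejected at overload resolution.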
John Maddock wrote:
A simpler alternative might be:
template
class polynomial;
Which implements a polynomial of maximum order N. Multiplication would have to be mod x^N as I can't think of an error handling mechanism that works for both runtime and constexpr contexts?
I think that throwing an exception works in both contexts. In a constexpr context, if the path that throws is chosen, the result is a compile-time error.
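That is easy to demonstrate in miniature; if the throwing branch is selected during constant evaluation the program fails to compile, while at run time it throws as usual (illustrative only):

#include <stdexcept>

constexpr int checked_divide(int a, int b)
{
    return b == 0 ? throw std::domain_error("divide by zero") : a / b;
}

constexpr int ok = checked_divide(10, 2);     // fine: evaluates to 5 at compile time
// constexpr int bad = checked_divide(10, 0); // error: not a constant expression

int divide_at_run_time(int x)
{
    return checked_divide(10, x);             // throws std::domain_error when x == 0
}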
participants (9)
- Edward Diener
- Jeremy Murphy
- John Maddock
- Karen Shaeffer
- Marc Glisse
- Oswin Krause
- Peter Dimov
- Rob Stewart
- Robert Ramey