On 2015-10-31 17:10, Robert Ramey wrote:
I think it would be hard to keep out. Also, I think it would be very difficult to keep it from growing.

I think this is a general problem in Boost: feature creep is all over the place. Considering the little manpower actually developing Boost, adding new features is likely to have long-term consequences, as it is much easier to add something to Boost than to remove it. There is also very little oversight once a library is accepted.
Nod. Generally though, I think most libraries stay within their core mission. Interestingly, though, the topic that started this - the polynomial class in Boost.Math - is a definite case of feature creep: that class was not reviewed when the library was accepted, and it was (and currently still is) a semi-documented implementation detail. I'm reasonably relaxed about making small improvements, but ultimately it could probably use a redesign, a substantially better implementation, and perhaps a mini-review or something before moving out of the "internal details" section of the docs.
This is a result of the Boost model: people propose their libraries to Boost, but Boost does not maintain a list of "open problems". As there are no requirements on what a library's priorities are, there are only loose guidelines for the period after acceptance. This model also has an impact on the latter point you mentioned: while there are people actively developing new libraries for Boost, there is not much checking of whether a library is relevant. People who do not understand the problem domain are likely to stay away from the review, thus biasing the review process: "I have never heard of coroutines or fibers; apparently this is a thing. Someone else should review it before I write something stupid".
It's always been the case that we've sometimes struggled to find enough domain experts to adequately review a niche library. That said, we do *not* require reviewers to be capable of designing and/or implementing the library under review, just to be potential users, as in "my only use case for this is X, but this only does Y", etc.
Now, coming back to the TMP autodiff library: if there is no standardized way to represent vectors, and the Boost way is not favourable, then there is also no standardized way to represent vector-valued functions, let alone their derivatives.
Indeed. However, a vector of values is a concept, perhaps backed up by a traits class to access the elements, so the actual type could be fairly opaque, perhaps at the expense of code readability.
Without this, automatic differentiation is just not that useful, as working with one-dimensional functions is almost trivial, especially since there are many tools that will simply spit out the right derivative, and implementing the result is only a few lines of code. On the other hand, as someone who has written a few thousand lines of derivatives of vector-valued functions in machine learning, I would love to see a high-performance TMP solution that I could just plug in.
One of the problems here is that tools like Mathematica (and hence Wolfram Alpha) are just so darn good. It would be nice if these tools could produce C++ code as output to save on the cut-and-paste, but really they are going to be very hard to compete with. I also worry somewhat about blindly using a black-box solution: if you use a template metaprogram to calculate the first N derivatives and evaluate them, how do you know that they are actually numerically stable? Sometimes casting a mark 1 eyeball over the formulae can save a lot of grief later (and sometimes not, of course). OK, so there are intervals, but those have issues too. Of course, this does nothing to detract from the ultimate coolness of the idea ;) Best, John.