Vicente J. Botet Escriba
On 14/06/15 at 21:19, Louis Dionne wrote:
Paul Fultz II writes:

- `fold` and `reverse_fold` should be preferred over `fold_right` and `fold_left`. This is more familiar to C++ programmers.

First, to my knowledge, the only libraries that even define fold and/or reverse_fold are Fusion and MPL, so it's not like there was an undeniable precedent for using those names instead of something else in C++. But even then, `fold` and `reverse_fold` functions are provided for consistency with those libraries, so I really don't see what the problem is. If you prefer those names, you can use them; they have exactly the same semantics as their Fusion counterparts.

Meta uses them too. I'm wondering whether reverse_fold shouldn't accept the same function signature as fold. That shouldn't be the case for fold_left and fold_right, as the parameters of the function to apply are exchanged.
reverse_fold has the same signature as fold. It does exactly what Fusion's reverse_fold does.
fold, fold.left : F(T) × S × (S × T → S) → S
fold.right      : F(T) × S × (T × S → S) → S

BTW, it would be nice if reverse_fold (and every function) had its own documented signature:

reverse_fold : F(T) × S × (S × T → S) → S
Good idea; I will document the signature of reverse_fold. See [1].
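To make the difference in argument order concrete, here is a minimal, self-contained sketch of the two folds (plain C++14 over std::vector, for illustration only; this is not Hana's actual implementation):

```cpp
#include <string>
#include <vector>

// fold_left: the folding function takes the state first: f(S, T) -> S
template <typename T, typename S, typename F>
S fold_left(std::vector<T> const& xs, S s, F f) {
    for (auto const& x : xs)
        s = f(s, x);
    return s;
}

// fold_right: the folding function takes the element first: f(T, S) -> S,
// and the sequence is consumed from the right
template <typename T, typename S, typename F>
S fold_right(std::vector<T> const& xs, S s, F f) {
    for (auto it = xs.rbegin(); it != xs.rend(); ++it)
        s = f(*it, s);
    return s;
}
```

With xs = {"a", "b", "c"} and a parenthesizing function, fold_left yields "(((s+a)+b)+c)" while fold_right yields "(a+(b+(c+s)))", which is why the two functions cannot share one signature.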
- Concepts are capitalized; however, models of a concept should not be capitalized (such as `IntegralConstant`, `Either`, `Lazy`, `Optional`, `Tuple`, etc.).

`IntegralConstant`, `Tuple`, etc. are tags used for tag dispatching, like Fusion's `vector_tag` & friends. I can't use a non-capitalized version as-is, because it would clash with `integral_constant`. Also, I find that using something like `tuple_tag` is uglier than using `Tuple`. Consider for example
make<tuple_tag>(xs...)    to<tuple_tag>(xs)
make<Tuple>(xs...)        to<Tuple>(xs)

I would prefer that Hana use CamelCase only for C++17/20 Concepts or for C++14 type requirements. This will be confusing for more than one reader.
You also have the option of defining make so that it takes a class template as parameter (see [A]):
make<_tuple>(xs...) to<_tuple>(xs)
In Hana, `make` is a variable template and `make<...>` is a function object. Unfortunately, this means that we have to make a choice between accepting a type template parameter or a template template parameter, but not both.
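This limitation can be illustrated with a simplified sketch (made-up names, not Hana's actual implementation): because `make` is a variable template, the kind of its template parameter is fixed once and for all, so it can accept a type like `Tuple`, or a template like `_tuple`, but not both under the same name.

```cpp
#include <tuple>
#include <utility>

struct Tuple { };   // a tag, capitalized as in Hana

template <typename Tag>
struct make_t;      // primary template, specialized for each tag

template <>
struct make_t<Tuple> {
    template <typename ...Xs>
    constexpr auto operator()(Xs&& ...xs) const {
        return std::make_tuple(std::forward<Xs>(xs)...);
    }
};

// `make` is a variable template over a *type* parameter; accepting a
// template template argument (e.g. make<_tuple>) would require a second,
// differently-named variable template.
template <typename Tag>
constexpr make_t<Tag> make{};
```

Usage looks like `auto xs = make<Tuple>(1, 'x', 2.0);`, mirroring the syntax discussed above.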
[...]
I would also prefer that Hana define its concrete types, but I think this battle is lost.
It isn't a lost battle. I would also prefer to define my concrete types,
believe me, but I'm not sure how to do it without screwing users up. There
are also some compile-time issues with specifying concrete types in some
cases. For example, say I specify the concrete type of a `Set` as `_set<...>`,
and the concrete type of `int_<i>` as `_int<i>`. Then, you're allowed to write
the following (or are you?):
auto xs = hana::make_set(int_<1>, int_<2>, int_<3>);
hana::_set<_int<3>, _int<2>, _int<1>> ys = xs;
Should this work? Well, sort of, because the order of elements inside a Set
is unspecified, so any permutation of the arguments to `hana::_set<...>`
could represent the same set, and the conversion above would have to accept
all of them.
- IntegralConstant is very strange. In Hana, it's not a concept (even though it's capitalized), but rather a so-called "data type". Furthermore, because of this strangeness it doesn't interoperate with other IntegralConstants (such as the ones from Tick), even though all the operators are defined.

IntegralConstant is not a concept, that's true. The fact that it does not interoperate with Tick's IntegralConstants out-of-the-box has nothing to do with that, however. You must make your Tick integral constants a model of Hana's Constant concept for it to work. See the gist at [2] for how to do that.

This is central to understanding Hana. Hana has no automatic mapping: you must state explicitly that a type is a model of a Concept, and I like that. This doesn't follow the current trend of C++17/20 Concepts, however.
A type is a model of a Concept if it has an explicit mapping through the Concept's associated mapping structure.
The Constant concept could have an MCD (minimal complete definition) that defaults to the nested members, which would make the mapping easier.
You're right, I've been thinking about that. This prompts me to open this issue [2].
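The "explicit mapping" idea can be sketched in isolation like this (illustrative names only, not Hana's actual machinery): modeling a concept means specializing an `*_impl` structure for your type, and nothing is ever deduced automatically.

```cpp
// A Constant-like concept reduced to one operation: value<C>().
template <typename C>
struct value_impl;               // left undefined: no model by default

// An integral constant coming from another library (think Tick's):
template <int i>
struct external_int { };

// Explicitly declare that external_int models the concept:
template <int i>
struct value_impl<external_int<i>> {
    static constexpr int apply() { return i; }
};

template <typename C>
constexpr int value() { return value_impl<C>::apply(); }
```

A type for which `value_impl` is not specialized simply fails to compile when used, which is exactly the "no automatic mapping" behavior described above.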
I would prefer if Hana used a different name for its Concepts. In addition, Hana Concepts have an associated class, whereas C++17/20 Concepts are predicates. I would reserve CamelCase for C++ Concepts and lowercase for the mapping struct:
struct applicative {
    template <typename A>
    struct transform_impl;
};

template <typename F>
concept bool Applicative = requires ...;
I think it would be better to wait until we _actually_ have Concepts in the language before trying to emulate them too hard. I think it is good that Hana does not pretend to have Concepts (in the Concepts-lite sense). Doing otherwise could be misleading.
[...]
- Concepts make no mention of minimum type requirements such as MoveConstructible.

I believe the right place to put this would be in the documentation of concrete models like `Tuple`, but not in the concepts (like `Sequence`). Hana's concepts operate at a slightly higher level and they do not really have a notion of storage. But I agree that it is necessary to document these requirements. Please refer to issue [6] for status.
I don't know. The constraints must be stated where needed. There is no reason a concept shouldn't require that the underlying type model some other Concept if this is needed. I believe that all the concepts make use of MoveConstructible types, or am I wrong? I agree that the documentation is not precise enough with respect to this point.
I'm not saying that e.g. Iterable _should not_ document these constraints, I'm just saying it does not, at the moment, operate at such a low level. For example, an infinite stream generating the Fibonacci sequence could be a model of Iterable, but there is no notion of storage in this case. However, like I said, I agree that these requirements must be documented where they exist.
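A self-contained sketch of such a storage-less infinite stream (my illustration, not Hana's code): each position of the Fibonacci sequence is encoded in a type, so head/tail exist without any stored elements, and MoveConstructible never enters the picture.

```cpp
// An "infinite" Iterable-like stream: nothing is stored; `tail` just names
// the next state of the computation in the type system.
template <unsigned long long a, unsigned long long b>
struct fib_stream {
    static constexpr unsigned long long head = a;
    using tail = fib_stream<b, a + b>;
};

using fibs = fib_stream<0, 1>;   // 0, 1, 1, 2, 3, 5, ...
```

Every `tail` is a distinct type, so the stream can be unfolded as far as the compiler allows without ever constructing an object.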
[...]
This doesn't mean that we cannot have an index of all the algorithms.
An index of all the algorithms will be added.
[...]
2. Concatenating strings makes complete sense, indeed. This could be handled very naturally by defining a `Monoid` model, but it was not done because I did not like using `+` for concatenating strings. I opened this issue [8] to try and find a proper resolution.

I missed the fact that Monoid introduces + for plus. The operator+ must be documented and appear in the Monoid section.
It wouldn't be weird to use + for concatenating strings, as std::string already provides operator+(). It would be much weirder to use zero/plus/+ with a monoid such as (int, *, 1).

This is the problem of naming the functions associated with a Concept. Haskell uses mappend and mempty; these names come from the List Monoid, and I don't like them either. IMHO, the names of Monoid operations must be independent of the domain. A monoid is a triple (T, op, neutral), where op is a binary operation on T and neutral is the neutral element of T with respect to this operation. What is wrong, then, with monoid::op and monoid::neutral, instead of the Hana globals plus and zero? Too verbose? Having these names in a namespace lets the user make the choice.
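As a minimal sketch of what this domain-independent naming could look like (my illustration, not a Hana proposal):

```cpp
#include <string>

template <typename T>
struct monoid;   // specialized for each model

template <>
struct monoid<std::string> {     // the (std::string, concat, "") monoid
    static std::string neutral() { return ""; }
    static std::string op(std::string a, std::string b) { return a + b; }
};

template <>
struct monoid<int> {             // the (int, *, 1) monoid mentioned above
    static int neutral() { return 1; }
    static int op(int a, int b) { return a * b; }
};
```

Nothing here forces a spelling like + or zero on the user; each model simply states its own op and neutral.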
It becomes wrong when you have more complex type classes. What would you call the Ring's operation and identity? And what would you call the operations of a Monad? eta and mu, as mathematicians do? No, I think we have to pick concrete names at some point, even if that means losing some generality.
I would not provide any operators for the Monoid Concept.
I am thinking about dissociating the operators from the concepts for technical reasons; see [3]. Instead, operators would be handled for each data type.
Regarding variable templates and ODR, I thought variable templates couldn't lead to ODR violations? I know global function objects (even constexpr ones) can lead to ODR violations, but I wasn't aware of the problem for variable templates. I would appreciate it if you could show me where the problem lies more specifically. Also, for reference, there's a defect report [9] related to global constexpr objects, and an issue tracking this problem here [10].
Finally, regarding executable bloat, we're talking about stateless constexpr objects here. At worst, we're talking 1 byte per object. At best (and most likely), we're talking about 0 bytes because of link-time optimizations. Otherwise, I could also give internal linkage to the global objects and they would probably be optimized away by the compiler itself, without even requiring LTO. Am I dreaming?

The best thing is to measure it. Are there any measurements of the same program using Hana and Meta?
It goes without saying that Meta has 0 runtime overhead, since it works purely at the type level. The only question is whether Hana can do just as well. From my micro-benchmarks, I know Hana does just as well in those cases. However, it is hard to predict exactly what will happen for non-trivial programs. I think compressing the storage of empty types should make it much, much easier for the compiler to optimize everything away. Also, like I said in another answer to Roland, it is always possible to enclose any value-level computation in `decltype` to ensure that no code is actually generated. But of course this is slightly annoying.

Regards,
Louis

[1]: https://github.com/ldionne/hana/issues/131
[2]: https://github.com/ldionne/hana/issues/132
[3]: https://github.com/ldionne/hana/issues/138