On 07/03/15 21:00, Louis Dionne wrote:
Vicente J. Botet Escriba writes:

On 07/03/15 17:38, Louis Dionne wrote:

Vicente J. Botet Escriba writes:

[...] Do you have an equivalent in Haskell to your datatype/generalized type?

No, but shooting from the hip you could approximate them with type classes.

A type class that is instantiated with another type class?

For example, the Tuple data type could be implemented as
-- The different actual representations for tuples
data Tuple0   = Tuple0
data Tuple1 a = Tuple1 a
-- etc...
-- The Tuple generalized type representing all those guys
class Tuple t where
  len :: t -> Int
instance Tuple Tuple0 where
  len _ = 0
As I already said on this list during the last informal review, what I make is a tradeoff between mathematical correctness and usability of the library. While Hana provides structures that satisfy the laws rigorously, I want users to be able to use those structures in a quick and dirty way as a means of being more productive when needed. For example, I would not object to someone writing [pseudocode]:
auto to_string = [](auto x) { /* convert x to a std::string */ };
auto xs = hana::make_tuple(1, 2.2, "abc");
auto ys = hana::transform(xs, to_string);
Since `1`, `2.2` and `"abc"` do not share a common data type and the signature of `transform` is (for a Functor F)
transform : F(T) x (T -> U) -> F(U)
, the above should normally be ill-formed. However, the library makes such usage possible because I kept in mind that we're working in a "dirty" heterogeneously-typed context.

Bottom line: I don't think heterogeneous types are dirty. We just need a different kind of function (an overloaded one) to transform them.
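For what it's worth, here is a minimal self-contained sketch of that idea in plain C++17, outside of Hana; `overloaded` and `transform_tuple` are made-up names for illustration, not part of any library discussed here:

#include <cstddef>
#include <string>
#include <tuple>
#include <utility>

// Illustrative helper: build one overloaded callable out of several lambdas.
template <typename ...F>
struct overloaded : F... { using F::operator()...; };
template <typename ...F> overloaded(F...) -> overloaded<F...>;

// Illustrative elementwise transform over a std::tuple (not Hana's transform).
template <typename ...T, typename F, std::size_t ...I>
auto transform_tuple_impl(std::tuple<T...> const& xs, F f, std::index_sequence<I...>) {
    return std::make_tuple(f(std::get<I>(xs))...);
}

template <typename ...T, typename F>
auto transform_tuple(std::tuple<T...> const& xs, F f) {
    return transform_tuple_impl(xs, f, std::index_sequence_for<T...>{});
}

int main() {
    auto xs = std::make_tuple(1, 2.2, std::string{"abc"});

    // A single *overloaded* callable handles every element type in the tuple.
    auto ys = transform_tuple(xs, overloaded{
        [](int i)                { return i + 1; },
        [](double d)             { return d * 2; },
        [](std::string const& s) { return s.size(); }
    });

    (void)ys; // std::tuple<int, double, std::size_t>
}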
If you have a heterogeneous type such as pair, for example, the mathematical transform function should be
transform : pair(T,U) x (T -> R) x (U -> S) -> pair(R, S)
As C++ has overloaded functions, we don't need to pass two functions, just a single overloaded function.

That's an interesting point of view. However, with this approach, you can't say that Pair is a Functor. It instead becomes a BiFunctor [2].

Right. A Functor has only one type parameter. In Haskell you cannot overload functions, so the functions must be named differently. But in C++ we can use the same name, so we could have a transform : BiFunctor x BiCallable -> BiFunctor.

Furthermore, let's say you have a pair(T, T).
Admittedly, the above is missing something fundamental if we want to implement other functions like `head`, but that reaches the limit of my Haskell-fu. I think looking at Haskell's type families [1] might provide more insight.

Does it mean that you are associating a "principal" type class (obtained by the datatype function) to a data type, which is then used to instantiate the other type classes?

transform is then
transform : pair(T, T) x (T -> R) x (T -> S) -> pair(R, S)
transform should hence still receive two functions, even though we could provide a single overloaded function.
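As a rough illustration of that signature (using std::pair rather than Hana's Pair, and a made-up `transform_pair` name):

#include <string>
#include <utility>

// Illustrative bifunctor-style transform: one function per component,
// i.e. transform : pair(T, U) x (T -> R) x (U -> S) -> pair(R, S)
template <typename T, typename U, typename F, typename G>
auto transform_pair(std::pair<T, U> const& p, F f, G g) {
    return std::make_pair(f(p.first), g(p.second));
}

int main() {
    std::pair<int, std::string> p{42, "abc"};
    auto q = transform_pair(p,
                            [](int i) { return i + 1; },                    // T -> R
                            [](std::string const& s) { return s.size(); }); // U -> S
    (void)q; // std::pair<int, std::size_t>
}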
pair(T, T) can be seen as an instance of Functor, just as list(T) can.
We need an alias:

template <class T>
using pairT = pair<T, T>;
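With such an alias, a one-function, Functor-style transform becomes expressible. A small sketch, again with std::pair and illustrative names only:

#include <utility>

// pair(T, T) viewed through a single type parameter, as suggested above.
template <typename T>
using pairT = std::pair<T, T>;

// Illustrative Functor-style transform: one function applied to both components,
// i.e. transform : pairT(T) x (T -> R) -> pairT(R)
template <typename T, typename F>
auto transform(pairT<T> const& p, F f) -> pairT<decltype(f(p.first))> {
    return {f(p.first), f(p.second)};
}

int main() {
    pairT<int> p{1, 2};
    auto q = transform(p, [](int i) { return i * 0.5; }); // pairT<double>
    (void)q;
}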
Let me now use a tuple instead of a pair as an example because it sticks more closely to what happens in Hana (Pair is not a Functor in Hana).
For a tuple(T1, ..., Tn), transform then becomes:
transform : Tuple(T1, ..., Tn) x (T1 -> R1) x ... x (Tn -> Rn) -> Tuple(R1, ..., Rn)
First, if you use this approach, you can't say that Tuple is a Functor. It has to be an N-Functor, which is the N-argument equivalent of the BiFunctor. Furthermore, you have to provide N different functions to be able to transform it. So for example, when transforming a tuple(int, int, int), you would have to say
transform(tuple(1, 2, 3),
          [](int i) { return i + 1; },
          [](int i) { return i + 1; },
          [](int i) { return i + 1; })
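A self-contained sketch of such an N-ary transform over std::tuple (the `transform_n` name is made up, and this is not Hana's API):

#include <cstddef>
#include <tuple>
#include <utility>

// Illustrative N-Functor transform: the k-th function is applied to the
// k-th element, i.e.
// transform : tuple(T1, ..., Tn) x (T1 -> R1) x ... x (Tn -> Rn) -> tuple(R1, ..., Rn)
template <typename Xs, typename Fs, std::size_t ...I>
auto transform_n_impl(Xs const& xs, Fs const& fs, std::index_sequence<I...>) {
    return std::make_tuple(std::get<I>(fs)(std::get<I>(xs))...);
}

template <typename ...T, typename ...F>
auto transform_n(std::tuple<T...> const& xs, F ...fs) {
    static_assert(sizeof...(T) == sizeof...(F), "one function per element");
    return transform_n_impl(xs, std::make_tuple(fs...),
                            std::index_sequence_for<T...>{});
}

int main() {
    auto r = transform_n(std::make_tuple(1, 2, 3),
                         [](int i) { return i + 1; },
                         [](int i) { return i + 1; },
                         [](int i) { return i + 1; });
    (void)r; // std::tuple<int, int, int>
}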
As with pair(T, T), a tuple(T, T, T) could still be seen as an instance of Functor.
That's not "heterogeneous" programming; it's just an equivalent way of writing
r1 = f1(t1)
r2 = f2(t2)
...
rn = fn(tn)
where `tk` are the elements of a tuple, and `fk` are the functions mapped on it by the N-Functor's `transform`. What we actually want is
r1 = f(t1)
r2 = f(t2)
...
rn = f(tn)
where `f` is defined as
f : (intersection of the data types of the tuple's elements) -> (some other type)

I'm sure this kind of transformation is useful. However, the semantics don't match the transform (fmap) function associated with the Functor I know.
In other words, `f` is a function whose domain is "the set of all objects with which I can do X". If there are objects in the tuple that don't support "X", then the program is ill-formed. Concepts will put us much closer to this.
Do you mean C++17/20 Concepts? Couldn't "type classes" help here?
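Purely for illustration, with the C++20 concepts that were standardized after this exchange, such a constrained domain could be spelled roughly as follows (the `Incrementable` concept is made up):

#include <concepts>
#include <string>

// Illustrative concept: "the set of all objects with which I can do X",
// here X = "be incremented".
template <typename T>
concept Incrementable = requires(T x) {
    { x + 1 } -> std::convertible_to<T>;
};

// f's domain is exactly the set of Incrementable types.
auto f = [](Incrementable auto x) { return x + 1; };

int main() {
    f(1);    // OK: int models Incrementable
    f(2.2);  // OK: double models Incrementable
    // f(std::string{"abc"}); // ill-formed: std::string does not model Incrementable
}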
The principle behind Hana's generalized types is really that we "lift" the types of the objects and then consider objects up-to-different-representations. I also think it can be seen as a quotient of the set of all types by the equivalence relation
T ~ U if and only if T and U model exactly the same concepts in exactly the same way
Concretely, the mathematical universe we're working with is not only a set; it's actually the C++ category where objects are Types and morphisms are functions. Hence, that "quotient" would have to also take morphisms into account. But I'm reaching the end of my math-fu now.
Thanks for the clarification. I believe I'm starting to understand your generalized type design/concept. I was looking for another design/concept, closer to a "type constructor".
[...]
How does the fact of having either and maybe<A> make it more difficult to interact with heterogeneous objects?

Let's try to implement a type _maybe<A>:
template <typename A>
struct _maybe {
    struct empty { };

    union { A a; empty e; } val;
    bool is_nothing;

    constexpr _maybe() : val{empty{}}, is_nothing{true} { }
    constexpr _maybe(A a) : val{a}, is_nothing{false} { }
};
Now, let's try to implement, for example, the `maybe` function, which has signature maybe :: B x (A -> B) x _maybe<A> -> B. Remember that those A's and B's are actually generalized types, not usual C++ types (we're working with heterogeneous stuff):
template <typename Default, typename F, typename M>
constexpr auto maybe(Default def, F f, M m) {
    return m.is_nothing ? def : f(m.val.a);
}

Seems legit? This works:
maybe(1, [](int i) { return i + 1; }, _maybe<int>{1});
But this does not, even though std::tuple<> and std::tuple<int> have the same generalized type:
maybe(
    std::tuple<>{},
    [](int i) { return std::make_tuple(i); },
    _maybe<int>{1}
);
It fails with
error: incompatible operand types ('tuple<(no argument)>' and 'tuple<int>')
    return m.is_nothing ? def : f(m.val.a);
                        ^ ~~~   ~~~~~~~~~~
This seems normal to me. Does it mean that classes such as experimental::optional<T> couldn't be seen as an instance of Hana's Functor type class?
m.is_nothing is not a constant expression inside the function. If it were, we could use `bool_` to use overloading to pick which branch we're executing, instead of doing it at runtime like we do right now. This is explained in the section on the limitations of constexpr [3].
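A minimal sketch of the compile-time alternative being alluded to: if the emptiness were encoded in the type rather than in a runtime bool, overloading could pick the branch at compile time, and the two branches could even return different types. The `just_`/`nothing_`/`maybe_` names below are illustrative, not Hana's implementation:

#include <tuple>

// Emptiness is part of the type, so overload resolution selects the branch.
template <typename A> struct just_ { A a; };
struct nothing_ { };

template <typename Default, typename F, typename A>
constexpr auto maybe_(Default, F f, just_<A> m) { return f(m.a); }

template <typename Default, typename F>
constexpr auto maybe_(Default def, F, nothing_) { return def; }

int main() {
    auto r1 = maybe_(std::tuple<>{}, [](int i) { return std::make_tuple(i); }, just_<int>{1});
    auto r2 = maybe_(std::tuple<>{}, [](int i) { return std::make_tuple(i); }, nothing_{});
    (void)r1; // std::tuple<int>
    (void)r2; // std::tuple<>
}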
Does it mean that your data types Maybe and Either are usable only as meta-programming tools? I don't see where I could use them at run-time.
[...]
Notice meta::quote.

I wonder whether meta::transform couldn't take care of the quote.

If it did, then we would always need to pass templates to higher-order algorithms. Basically, that would be like accepting metafunctions instead of metafunction classes in the MPL interface. There are good reasons not to do that.

I see.
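For readers unfamiliar with the vocabulary, here is a rough sketch of the metafunction vs. metafunction class distinction and of a quote-like adaptor; the `quote`, `apply` and `eval` names are illustrative and do not reproduce Meta's or the MPL's actual definitions:

#include <type_traits>

// A metafunction: a class template whose nested ::type is the result.
template <typename T>
struct add_pointer { using type = T*; };

// A metafunction class: an ordinary (non-template) type with a nested
// template through which it is "called". quote turns the former into the latter.
template <template <typename> class F>
struct quote {
    template <typename T>
    using apply = typename F<T>::type;
};

// A higher-order algorithm can then accept metafunction classes uniformly,
// without needing a template template parameter for every user-provided F.
template <typename Fn, typename T>
using eval = typename Fn::template apply<T>;

static_assert(std::is_same<eval<quote<add_pointer>, int>, int*>::value, "");

int main() { }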
[...]
My point was more about how the library checks that the first argument of transform is a Functor and that the second argument is a "function" from the Functor's underlying type T to another type U.
transform : Functor(T) x (T -> U) -> Functor(U)
If the implementation declares transform as

auto transform = [](auto F, auto Fun) { ... };
Couldn't the library check that the arguments passed in F and Fun respect the requirements?

Yes, but that would require `Fun` stating that it is a function from data type T to data type U, which is impractical. In particular it means that this:
transform(make_tuple(1, 2, 3), [](int i) { return i + 1; });
would not work. Instead one would need to write
transform(make_tuple(1, 2, 3), hana::function<...>([](int i) { return i + 1; }));

Then, say you have
transform(make_tuple(1, std::string{"abc"}, my_class{}),
          hana::function<??(???)>([](auto x) { ... }));
The way I see it, it's just not convenient. Doing it would definitely force us to be mathematically correct though. Instead, I aim to catch most programming errors by checking that F is a Functor (which is easy). If you send in a random function, then the compiler will tell you where you're wrong, but Hana won't.
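To make that concrete, here is a rough sketch (with std::tuple rather than Hana's tuple, and a made-up `transform_checked`) of why the container side of the check is easy while the function side is not:

#include <tuple>
#include <type_traits>
#include <utility>

// The "container" side of the contract is easy to state...
template <typename T>    struct is_std_tuple                   : std::false_type { };
template <typename ...T> struct is_std_tuple<std::tuple<T...>> : std::true_type { };

template <typename Xs, typename F>
auto transform_checked(Xs const& xs, F f) {
    static_assert(is_std_tuple<Xs>::value,
                  "transform: the first argument must be a tuple-like Functor");
    // ...while checking f's domain and codomain would require f to carry
    // them around, which a plain (generic) lambda does not.
    return std::apply([&](auto const& ...x) { return std::make_tuple(f(x)...); }, xs);
}

int main() {
    auto ys = transform_checked(std::make_tuple(1, 2, 3),
                                [](int i) { return i + 1; });
    (void)ys;
    // transform_checked(42, [](int i) { return i + 1; }); // caught by the static_assert
}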
Maybe it could be a different "transform" function that applies to ProductTypes instead of Functors. The requirement on Fun could be that Fun is callable with any of the types in the ProductType, and that each of these overloads has the same result type U.

I think I need to better understand the scope of the library: is Hana a pure functional programming library, another way to do pure meta-programming, or both at the same time but restricted to the limitations of the intersection?

Best,
Vicente