Sun, 21 Feb 2021 at 16:18, Niall Douglas via Boost wrote:
On 20/02/2021 22:15, Andrzej Krzemienski via Boost wrote:
Now, I choose to use the DI library (this is where I have trouble understanding: why would I want to do that?). I get the following result:
```
int main() {
    State s = di::make_injector().create<State>(0, 0, 2, 2, 1, 1);
}
```
And now I get back to the situation where I am passing six ints and can easily confuse which int represents what.
I am pretty sure I am now unfairly mischaracterizing the library. But this is what I get from the motivation and tutorial sections, and the explanations I have seen so far. You cannot see this behavior in the tutorial example that is using class `app`, because most of the types in there are default-constructed or constructed from values that were themselves (perhaps recursively) default-constructed. So it looks to me that in order to appreciate this library, I have to make most of my types default-constructible.
At which point am I missing something?
I think the other answers answered most of your points, so I'll add to those only two:
1. It is hard for C++ devs to see the wood for the trees, because C++ uses dependency injection all over the place: in its standard library, its standard idioms, and in commonplace practice. The classic example is Allocators:
```
std::vector<int, MyAllocator>
```
std::vector delegates the responsibility of allocating, creating, destroying and deallocating arrays of int to the user supplied service MyAllocator. This is literally Dependency Injection, and it is so commonplace in C++ as a design pattern that we don't call it that.
Purists from other languages will point out that std::vector knows the *concrete type* of the delegated service. But beyond that it has no idea what the implementation is, only that it promises the side effects guaranteed by the Allocator concept.
If you remove knowledge of _concrete_ type, and replace that with _abstract_ type, you get what we would call a visitor class in C++ - basically a bunch of pure virtual functions. This corresponds to std::pmr::vector<int> whereby the concrete implementation type for allocating, creating, destroying and deallocating arrays of int is no longer known to vector, only that there is an abstract API which promises the side effects guaranteed by the Allocator Concept.
Now, imagine that your program has some memory problem, and it only ever uses std::vector<int>. Thanks to Dependency Injection, you get two degrees of freedom:
a) If you chose the concrete type MyAllocator, you can, via a recompile, inject a mock MyAllocator for testing and debugging.
b) If you chose the abstract type MyPmrAllocator, you don't need to recompile your code: you simply swap the MyPmrAllocator instance you construct at the beginning, which is injected into all your classes, for a mock MyPmrAllocator for testing and debugging.
Option a) is tractable in codebases < 100M lines of code. Option b) becomes worth it in codebases > 100M lines of code. Note that as my codebase grows, I can proactively take a decision to move from degree of freedom a) to b) without breaking all my source code i.e. I can choose for my runtime to be slower in exchange for radically reduced recompile times.
Thanks Niall. Maybe this explains the poor reception of the DI library. If injecting dependencies is so natural in C++, and the library documentation starts by convincing me that I do not know what it is and that I am STUPID (rather than SOLID), then this builds a confusion that makes it more difficult to consume the rest.
2. I just gave a specific example of the value of the two typical forms of Dependency Injection in C++, and I'm going to assume it's uncontroversial (I actually think it's an exemplar of all that's wrong with Allocators, but that is off topic for here).
Something peculiar about how we typically do Dependency Injection in C++ is that it's always *specific* and not *generalised*. If we have a problem e.g. delegation of memory allocation, we design a _specific_ dependency injected solution. What we don't do in C++ is design a _generalised_ dependency injection solution which is universal (unlike say in Java).
The advantage of a universal DI which exists everywhere is much like the reason choosing Outcome is better than rolling your own result<T> type. Yes, anybody can roll their own result<T> type; indeed, most people probably do. But when library A has resultA<T>, library B has resultB<T>, and library C has resultC<T>, how is a codebase dependent on all three libraries supposed to interoperate between them easily?
Most of Outcome's complexity stems from being friendly to third-party resultX<T> types. I myself have deployed Outcome in foreign codebases, each using its own Result type, and Outcome can (usually) capture all of those seamlessly without loss of original information fidelity. Thus Outcome becomes "the one ring to rule them all", which is its exact value proposition, and, I would suppose, why Outcome was accepted into Boost.
What I would like to see of any Boost.DependencyInjection is the exact same "one ring to rule them all" in that it should be easy to integrate *any* bespoke third party Dependency Injection mechanism or framework into Boost.DependencyInjection such that one can *seamlessly* compose library A, library B and library C in a single application, and it all "just works".
I am having difficulty mapping the library interop offered by Outcome onto the similar library interop that would be gained from such a hypothetical dependency injection library. The only way I can interpret your words is a situation where one library creates a recipe for injecting parameters into my class widget, and another library uses this recipe to actually create objects of my type:
```
// library one:
std::function<app()> make_app = [] {
    logger log1{"app.logger.lo"_l};
    logger log2{"app.logger.hi"_l};
    renderer renderer_{"main_renderer"_l};
    view view_{"main_view"_l, renderer_, log1};
    model model_{"main_model"_l, log2};                               // note: the other logger
    controller controller_{"main_controller"_l, model_, view_, log2}; // note: the other logger
    user user_{"main_user"_l, log1};
    return app{"main_app"_l, controller_, user_};
};

// library two:
void some_fun(std::function<app()> make_app) {
    app app1 = make_app();
    app1.run();
    app app2 = make_app();
    app2.run();
}
```
But did you have something like this in mind when you wrote the above description? And if so, what is the value added by a dedicated library, if one can achieve the same goal with std::function? Is the only motivation that in some subset of cases the library can try to deduce (hopefully correctly), from the number of arguments, how I wanted to initialize them? It looks like a more adequate name for such a library is "factory", because what it does is create objects. Or maybe I am still missing something?

Regards, &rzej;
I'll be frank in saying that I don't believe the current proposed Boost.DI does this. Unless I and most other people here can be convinced otherwise, my personal current expectation is that the proposed Boost.DI will be rejected, but hopefully with ample feedback on what to do for a Boost.DI v2, assuming Kris has the stamina and will.
In my personal opinion, if Boost.DI _can_ seamlessly compose arbitrary Allocators, both concrete and abstract, along with other custom bespoke Dependency Injection designs from across the C++ standard library and the Boost libraries, then its tutorial ought to describe that integration, just as the end of the Outcome tutorial shows three separate, independent, different error-handling strategies in three separate library dependencies being seamlessly integrated into one application, with Outcome doing all the donkey work between those dependencies.
I think that if the tutorial demonstrated that seamless composition in action, it would be compelling.
Niall