FWIW, I did look at the documentation for DI. I was quite pleased with it. Here are the things I liked:

a) A lot of effort was invested in organizing the docs.
b) They were quite readable.
c) The small examples helped me understand the library. I've been exposed to the idea of DI in the past and could never figure out what it was or what its purpose was.
d) The physical layout is very consistent with other Boost libraries. (I've become skeptical of the use of other systems. In my view they don't really add much - other than one more way of doing things.)

All this is very good. BUT:

a) I was left with the feeling that I sort of understood it, but could figure it out if I spent more time on it. That's a better response than I have to most library documentation.
b) The same goes for the motivation / use case for the library: what problem does it solve, how does it compare with other solutions, etc.

So I felt we were 90% there. My personal experience has made me a little gun-shy. When I get to this point in my own work, I sometimes find that getting the last 10% ends up being impossible. It usually turns out that my "solution" - though it looks good - has some flaws that I didn't really understand myself. I was blind to them. Only when I try to reconcile the anomalies in the documentation does it become apparent that I've overlooked some subtle aspect of the library. It could be conceptual side effects, conflicts with other ideas the library uses, or perhaps it adds more conceptual overhead than the library itself eliminates. Or something.

This is the basis of my recommendation that the ping-pong between writing code, examples, and tests is an under-appreciated contributor to good software.

Sorry I can't be more helpful - the whole subject is sort of confusing to me.

Robert Ramey