On Sun, Jan 15, 2023 at 7:17 AM Klemens Morgenstern via Boost wrote:
> Aedis is a Redis client library built on top of Boost.Asio that implements the latest version of the Redis communication protocol, RESP3.
This is my review of Boost.Aedis. I am an Asio expert and a Redis ignoramus. In fact, I had no idea what Redis was until I heard about Aedis.
> Please explicitly state that you either *accept* or *reject* the inclusion of this library into Boost.
I believe we should ACCEPT this library for the following reasons:

* The API is lean, efficient, and works
* The library uses Asio correctly
* Redis seems popular and useful
* Aedis is a prerequisite for building higher-level abstractions

I have studied the interface and implementation of the library and also asked the author questions on the C++ Language Slack. There is always a solid rationale for design choices, and they often harmonize with the way Redis works (at least, I think they do... I'm not a Redis expert, but I am learning). I have confidence that Marcelo will be able to maintain and evolve this library to the high standards that users expect from Boost.

There is value in Boost providing a comprehensive set of libraries for implementing cloud services. We already have Asio, Beast, JSON, URL, MySQL, and ancillary libraries such as Describe and the world's fastest unordered maps. With Aedis, a possible upcoming Mustache library, and some new networking libraries I am writing, there is a compelling case that the "monolithic Boost" can be the premier collection for building network applications and services in C++.

I spent several hours evaluating the library, looking at the interface and the implementation, and studying the tests and examples. I set breakpoints in Visual Studio and stepped right into Marcelo's innermost thoughts. I opened the kimono, so to speak.

On a side note, now that I have learned how Redis structures its responses and the complexities involved: Marcelo has done a fine job of finding an efficient and simple data structure (the resp3::node) and accompanying I/O algorithms to represent those responses in a way that gives callers flexibility in how they consume the results. Plus, it is implicitly allocator-capable and permits optimizations without intrusive API declarations (like Allocator template parameters).

Installing
------------

Using vcpkg on Windows I installed both 32-bit and 64-bit OpenSSL. Then I built the Visual Studio 2019 solution for Aedis using this line:

cmake -G "Visual Studio 16 2019" -A x64 -B bin64 -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake

Visual Studio opened the solution with no problems. It has a bunch of C++17 and C++20 examples along with tests and several other projects. Right away I was able to build with zero errors and run the tests and examples. These programs do expect to see a local Redis server on port 6379, which I didn't have set up, but I was provided a private remote server and the examples worked fine with it. New Boost libraries still require Jamfiles, so those will need to be added.

Answers to Questions
------------------------------
- Does this library bring real benefit to C++ developers for real-world use cases?
Well... I guess? A Google search for +"Redis" produces 54 million results. A Google search for +"Redis" +"C++" produces 7 million results. If these numbers mean anything, they indicate that there is demand. If we assume that Redis is useful, then it follows that this library is useful, as it provides an idiomatic asynchronous client for interacting with Redis servers.

...three days later...

Anyway, now that I have used the examples and studied Redis more, I can say that Redis is pretty damn cool. It is simple yet powerful in a way that the stuffy and confining structure of SQL isn't. The combination of Beast, Aedis, and a cloud Redis instance is enough to implement a limitless number of different types of web applications.
- Do you have an application for this library?
Me personally, nothing specific. But I can see how it could be used to build those web applications.
- Does the API match with current best practices?
The library adopts Asio's "universal model for asynchronous operations", which is a very sensible choice given that the library is built on Asio. I would consider Aedis to be a LOW-LEVEL library. That is, it represents a solid foundation upon which higher-level abstractions can be built. I also believe that there is "nothing between Aedis and Asio", which is precisely what you want for a low-level library. It hits the right balance: the connection object does useful things for you, such as multiplexing and reconnecting, while the request and response interfaces avoid getting in your way or making too many assumptions.

It was pointed out in another review that the API should use enums or something more type-safe. I agree that this could be an improvement, but I think this sort of thing is what a higher-level abstraction would provide. Aedis can be the foundation for that higher-level abstraction; such an abstraction might use Boost.Describe and/or Boost.JSON, for example.
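To make that concrete, here is roughly the shape of the pattern I ended up with while experimenting: one async_run that owns the socket, and independent async_exec calls funneled through the same connection. I am writing this from memory rather than pasting from the examples, so the exact signatures (async_run's connection parameters in particular, which I have elided, plus the operation enum and header spellings) may not match the library verbatim; treat it as a sketch of the pattern, not copy-paste code.

// Sketch of the "one async_run, many async_exec" pattern described above.
// Written from memory; async_run's connection parameters are elided and
// the names may not match the library exactly.
#include <boost/asio.hpp>
#include <aedis.hpp>
#include <iostream>
#include <memory>
#include <string>
#include <tuple>

namespace net = boost::asio;
using aedis::adapt;
using aedis::resp3::request;
using connection = aedis::connection;

int main()
{
    net::io_context ioc;
    auto conn = std::make_shared<connection>(ioc);

    // One long-running operation owns the socket, the RESP3 framing, and
    // the reconnection logic (endpoint/configuration arguments elided).
    conn->async_run(net::detached);

    // Independent requests are then funneled through the same connection;
    // the connection object does the multiplexing.
    request req;
    req.push("PING", "Hello");

    auto resp = std::make_shared<std::tuple<std::string>>();
    conn->async_exec(req, adapt(*resp), [conn, resp](auto ec, auto) {
        if (!ec)
            std::cout << std::get<0>(*resp) << "\n";
        // Name from memory: cancel the run operation so ioc.run() returns.
        conn->cancel(aedis::operation::run);
    });

    ioc.run();
}

The point being: the only things you write are requests and responses; everything about the wire stays inside the connection, which is exactly the balance I want from a low-level library.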
- Is the documentation helpful and clear?
FUCK NO; it's like my ex-wife: beautiful but useless. This is the weakest part of the library. Fortunately the API is simple enough that if you stare at or copy-and-paste the examples you can get a decent start. Sometimes the docs are outright perplexing. For example, in the Responses section it says "below is a request with a compile time size", but what? No, that can't be right; the compiler has no idea that the request has 3 elements (my best guess at what it actually means is sketched below). Anyway, the docs need a lot of work, but I wouldn't hold up the library for that.
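For what it's worth, here is my best guess at what that sentence is trying to say. This is reconstructed from the examples and from memory, so the namespaces, req.push, and the signatures may be off; it is an illustration of the idea, not library-blessed code.

// My reading of the "compile time size" remark: the request is a runtime
// object, but if you know which commands you pushed then you also know how
// many replies to expect, so the *response* can be a std::tuple with one
// element per command. The tuple is what has a compile-time size of 3
// here; each individual reply is still parsed at run time.
#include <aedis.hpp>
#include <memory>
#include <string>
#include <tuple>

using aedis::adapt;
using aedis::resp3::request;
using connection = aedis::connection;

void three_commands(std::shared_ptr<connection> conn)
{
    request req;
    req.push("PING", "one");
    req.push("GET", "some-key");
    req.push("INCR", "some-counter");

    auto resp = std::make_shared<std::tuple<std::string, std::string, long long>>();
    conn->async_exec(req, adapt(*resp), [resp](auto ec, auto) {
        // On success, std::get<0..2>(*resp) hold the three replies.
    });
}

If that reading is right, the docs should say "a response whose size is known at compile time" and show the tuple right next to the request.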
- Did you try to use it? What problems or surprises did you encounter?
Yep, I played around with the examples. The adapt() model is kind of hard to wrap my head around: if I use a tuple, then the number of types is fixed at compile time, but when I run a Redis command the number of results can vary, so how does this work? (My working theory is sketched below.) What's this max_read_size business? The docs say it is the maximum size of the read buffer, but there is no real explanation of what that actually means or when you would want to change it.
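After a few more hours with the examples, my working theory for the variable-size case is below; again reconstructed from memory, so take the exact names (node<std::string> in particular) with a grain of salt.

// When the shape of the reply is not known up front (here LRANGE returns
// however many elements the list happens to have), don't use a tuple:
// adapt into a dynamic container of resp3 nodes and walk it afterwards.
// Names and signatures are from memory and may not match the library.
#include <aedis.hpp>
#include <memory>
#include <string>
#include <vector>

using aedis::adapt;
using aedis::resp3::node;
using aedis::resp3::request;
using connection = aedis::connection;

void unknown_size(std::shared_ptr<connection> conn)
{
    request req;
    req.push("LRANGE", "some-list", "0", "-1");

    // A flat vector of nodes can represent any RESP3 reply, aggregates
    // included; each node carries (as I understand it) the RESP3 type,
    // depth, aggregate size, and value.
    auto resp = std::make_shared<std::vector<node<std::string>>>();
    conn->async_exec(req, adapt(*resp), [resp](auto ec, auto) {
        // Walk *resp here to consume the reply.
    });
}

So the tuple form is for when the shape of the reply is known up front, and the node form is for when it isn't. If that is correct, one paragraph in the docs saying exactly that would have saved me an afternoon.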
- What is your evaluation of the implementation?
The implementation is basically perfect. It uses all of the Asio best practices. The author invented great solutions to the problems that arise specifically from how the Redis interface works. In particular, the way that async_exec and async_run are designed allows for fully optimized interaction with the Redis server, as well as maximum flexibility in terms of how the caller will use it with Asio.

Suggestions
----------------

* Add example code snippets to all of the Javadocs.
* Add a real "Reference" which links every public symbol in the library to a page explaining what it is. For example, here's the Reference for Boost.JSON: https://www.boost.org/doc/libs/1_81_0/libs/json/doc/html/json/quickref.html
* Add an interface target that has all the header files so that they can be browsed in Visual Studio's Solution Explorer.
* resp3::node needs a variadic constructor or something to be able to pass arguments to the contained String member's constructor.
* resp3::node could use some protection (static_assert, maybe) to prevent stupidity like using const or reference types for String.
* "String" as a concept needs a concise definition in the documentation. Example: https://www.boost.org/doc/libs/master/libs/url/doc/html/url/concepts/charset...
* "resp3::type" as a type name is not technically *wrong*, but it could be confusing as hell since "type" is usually the name of a metafunction's result. I use "kind" in my libraries, but this is a personal choice.
* resp3::to_string returns a char pointer instead of a string_view? (And it isn't marked noexcept.)
* resp3::is_aggregate is not constexpr. I could see a higher-level wrapper around Aedis wanting to determine at compile time whether a type constant is an aggregate in order to provide a more type-safe API.
* ResponseAdapter needs a concept specification.
* detail::ignore_response needs to be public, since it is being used as a default template parameter in a public function.
* resp3::read and resp3::write are synchronous and look like they don't use the connection object. Consider removing them; only offer features that have demonstrable need, and add them back only if they are requested.
* What's the point of separating the read and write functions into different headers? It's not like you can only do one or the other... Callers will always need both sets of function declarations.
* If you keep write(), then either Request needs a concept definition or it should not be a template parameter.
* Consider using tag_invoke instead of the current to_bulk and from_bulk customization points.
* to_bulk has a Request template parameter, which needs a concept definition.
* Implementation functions like add_blob and add_header could call reserve() on the string to prevent unnecessary reallocations.
* resp3::request might return string_view from payload(), which is a nice form of type erasure. Currently it returns `std::basic_string<char, std::char_traits<char>, std::pmr::polymorphic_allocator<char>>`, which is quite a mouthful.
* The Range template parameter in push_range needs a concept definition, or you need to use an existing concept from the C++ Standard. Is it a ForwardRange? Is it a range of some particular value type? You got some explainin' to do!
* The docs for the push_range overload which accepts two iterators do not list the constraints on the iterator's value type. From the implementation it looks like it needs something resembling a string, but you need to explain that. Perhaps anything convertible to string_view? Here's an example: https://github.com/boostorg/url/blob/47ae94a6455ece36c39db965978264cefa302bd...
* In function templates that use concepts, I would use either enable_if or static_assert to reject types that do not meet the requirements. I usually prefer static_assert because it gives the user a nice error message and you can put a comment there. Example: https://github.com/boostorg/url/blob/33de44de1b4a22a77a73090adfa783217ed5210...
* "Key" is a template parameter in some places and there's no concept definition or Constraints clause... I sense a pattern here.
* Remove the interfaces which allow for allocator customization (via memory_resource / pmr). There is no motivating use case and there are no benchmarks. If users request it, you can add it back later without breaking anything. Spoiler alert: the library is already written in a way that allows optimal allocation patterns, and I am skeptical that using a custom allocator can improve on it. But everyone is welcome to try :). You better bring receipts though.

If this library is accepted into Boost, it should receive work on the documentation and a tune-up along the lines of my Suggestions above before it is integrated into a release. Note, however, that my ACCEPT is without conditions.

--
Regards, Vinnie
Follow me on GitHub: https://github.com/vinniefalco