Is Boost interested in a Redis client library?
Hi,

I am the author of Aedis, a Redis client library built on top of Boost.Asio that provides communication with the Redis server over its native and most recent protocol, RESP3 [1]. It is an implementation from scratch that depends only on other Boost libraries (Asio, Variant2, Optional, etc.) and targets C++14.

I would like to propose it for inclusion in Boost in the near future if there is enough interest. At the moment I am interested in early feedback as I finish writing it (writing docs, improving tests, etc.):

- Do you think this library would be valuable to Boost?
- Was the documentation helpful to understand what Aedis provides?
- Does the design look good?
- Any feedback is welcome.

Link to the Aedis documentation: https://mzimbres.github.io/aedis/

Link to the github project: https://github.com/mzimbres/aedis

If you have never heard of Redis, this is the best place to start: https://redis.io

Regards,
Marcelo

[1] https://github.com/antirez/RESP3/blob/master/spec.md
On Fri, 8 Apr 2022 at 12:10, Marcelo Zimbres Silva via Boost <boost@lists.boost.org> wrote:
[...]
I'm not a redis user myself so I won't be able to comment much on this topic, but here's some early and sloppy feedback...
The dynamic buffer example from the documentation's front page seems weird.

Dynamic buffer fills the gap to turn raw IO into buffered IO. Raw IO is all about passing the read calls directly to the layer immediately below. When you're parsing streams there's no implicit framing between the messages, so you buffer. And you do need to buffer, because only by chance would you receive exactly the contents of a single message. If you receive fewer bytes than required, keep the buffer and read more. If you receive more bytes than required, consume the current message and keep the rest for the next read.

It follows that the dynamic buffer will abstract a few properties:

- Capacity (the unused part of the buffer)
- Ready data

Then Boost.Asio also introduces the concept of max_size to allow growing buffers with a level of protection against DoS attacks. Similar frameworks do just the same (e.g. bufio.Scanner from Golang).

But do notice a concept is still missing in Boost.Asio's dynamic buffer: a pointer/marker to the current message size. Boost.Asio's buffered IO abstraction (dynamic buffer) is different from other frameworks in this regard (e.g. Golang's bufio.Scanner) and defers this responsibility to the user (cf. read_until()'s return value). I personally don't like Boost.Asio's choice here, but that's just the way it is.

Now, from your example:

std::string request, read_buffer, response;
// ...
co_await resp3::async_read(socket, dynamic_buffer(read_buffer)); // Hello (ignored).
co_await resp3::async_read(socket, dynamic_buffer(read_buffer), adapt(response)); // Set
co_await resp3::async_read(socket, dynamic_buffer(read_buffer)); // Quit (ignored)

By recreating the dynamic buffer every time, you discard the "native" capacity property of the dynamic buffer. Also, I don't see a return value to indicate the current message size, so I know what to discard from the current buffer. You always discard the current message yourself (and a second argument must be supplied if the user wants to save the response). If the current message were kept in the buffer then the response could just hold string_views into it.

What are your thoughts?

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
Hi Vinícius,
On Fri, 8 Apr 2022 at 20:20, Vinícius dos Santos Oliveira wrote:

I'm not a redis user myself so I won't be able to comment much on this topic, but here's some early and sloppy feedback...
Thanks.
The dynamic buffer example from the documentation's front-page seems weird.
Dynamic buffer fills the gap to turn raw IO into buffered IO. Raw IO is all about passing the read calls directly to the layer immediately below. When you're parsing streams there's no implicit framing between the messages, so you buffer. And you do need to buffer, because only by chance would you receive exactly the contents of a single message. If you receive fewer bytes than required, keep the buffer and read more. If you receive more bytes than required, consume the current message and keep the rest for the next read.
It follows that the dynamic buffer will abstract a few properties:
- Capacity (the unused part of the buffer)
- Ready data
Then Boost.Asio will also introduce the concept of max_size to allow growing buffers with a level of protection against DoS attacks.
Notice that Aedis works on the client side, which means users will be connecting to a trusted server. A DoS attack would therefore have to come from the server against the client, a scenario I am not aware of, i.e. not a security hole. That said, I don't have any special treatment for max_size at the moment, so reading more than permitted will throw. I will check the code to see where this could be an issue.
Similar frameworks will do just the same (e.g. bufio.Scanner from Golang). But do notice a concept is still missing in Boost.Asio's dynamic buffer: a pointer/marker to the current message size. Boost.Asio's buffered IO abstraction (dynamic buffer) is different from other frameworks in this regard (e.g. Golang's bufio.Scanner) and defers this responsibility to the user (cf. read_until()'s return value). I personally don't like Boost.Asio's choice here, but that's just the way it is.
Thanks, informative.
Now, from your example:
std::string request, read_buffer, response;
// ...
co_await resp3::async_read(socket, dynamic_buffer(read_buffer)); // Hello (ignored).
co_await resp3::async_read(socket, dynamic_buffer(read_buffer), adapt(response)); // Set
co_await resp3::async_read(socket, dynamic_buffer(read_buffer)); // Quit (ignored)
By recreating the dynamic buffer every time, you discard the "native" capacity property from the dynamic buffer.
Also I don't see a return value to indicate the current message size
The async_read completion handler has the signature void(boost::system::error_code, std::size_t), where the second argument is the number of bytes that have been read, i.e. the size of the message. Users can keep it if they judge necessary, namely

auto n = co_await resp3::async_read(...);
so I know what to discard from the current buffer. You always discard the current message yourself (and a second argument must be supplied if the user wants to save the response).
Yes, Aedis consumes the buffer as it parses the message, in a manner similar to the async_read_until - erase pattern [1]. Each new chunk of data is handed to the user and erased afterwards; this is efficient as it doesn't require much memory to read messages. I believe I am not alone here; I've had a look at Beast and it looks like it also consumes the message on its own, see

https://github.com/boostorg/beast/blob/17141a331ad4943e76933e4db51ef48a170aa...

The downside of the *async_read_until - erase* pattern, however, is that it rotates the buffer constantly with x.consume(n), adding some latency. This could be mitigated with a dynamic_buffer implementation that consumes the content less eagerly (in exchange for increased memory use).
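To make the pattern concrete, here is a minimal sketch of the idea (synchronous and with a generic "\r\n"-delimited field rather than real RESP3, just for illustration):

#include <boost/asio.hpp>
#include <string>

namespace asio = boost::asio;

// Reads one "\r\n"-delimited field, then consumes it from the buffer.
// Bytes past the delimiter may already be sitting in the buffer and are
// kept for the next call, which is the whole point of buffered reads.
std::string read_field(asio::ip::tcp::socket& socket, std::string& buffer)
{
   auto dbuf = asio::dynamic_buffer(buffer);
   auto n = asio::read_until(socket, dbuf, "\r\n"); // n includes "\r\n".
   std::string field = buffer.substr(0, n - 2);
   dbuf.consume(n); // The erase step: rotates the buffer.
   return field;
}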
If the current message was kept in the buffer then the response could just hold string_views to it. What are your thoughts?
As I said above, response adapters have to act on each new chunk of RESP3 data; afterwards that memory will be overwritten with new data when the buffer is rotated and won't be available anymore. I consider this a good thing. For example, if you are receiving JSON files from Redis, it is better to deserialize them as the content becomes available than to keep them in intermediate storage to be processed later. I can't come up with a use case where keeping the message around would be desirable. In any case, I made a lot of effort to avoid temporaries. I also don't see a way around it; sooner or later the buffer has to be consumed.

Regards,
Marcelo

[1] https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/async_re...
On Fri, 8 Apr 2022 at 16:52, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
Now, from your example:
std::string request, read_buffer, response;
// ...
co_await resp3::async_read(socket, dynamic_buffer(read_buffer)); // Hello (ignored).
co_await resp3::async_read(socket, dynamic_buffer(read_buffer), adapt(response)); // Set
co_await resp3::async_read(socket, dynamic_buffer(read_buffer)); // Quit (ignored)
By recreating the dynamic buffer every time, you discard the "native" capacity property from the dynamic buffer.
Also I don't see a return value to indicate the current message size
The async_read completion handler has the following signature void(boost::system::error_code, std::size_t), where the second argument is the number of bytes that have been read i.e. the size of the message. Users can keep it if they judge necessary, namely
auto n = co_await resp3::async_read(...);
so I know what to discard from the current buffer. You always discard the current message yourself (and a second argument must be supplied if the user wants to save the response).
Yes, Aedis consumes the buffer as it parses the message, in a manner similar to the async_read_until - erase pattern [1]. Each new chunk of data is handed to the user and erased afterwards; this is efficient as it doesn't require much memory to read messages.
Can you clarify what you mean by "erased afterwards"? Afterwards when? Before or after the delivery to the user? When? I need to know when before I can comment much further.

It's not clear to me at all how Aedis manages the buffer. If the buffer were an internal implementation detail (as in wrapping the underlying socket) I wouldn't care, but as it is... it's part of the public interface and I must understand how to use it.

The downside of the *async_read_until - erase* pattern, however, is that it rotates the buffer constantly with x.consume(n), adding some latency. This could be mitigated with a dynamic_buffer implementation that consumes the content less eagerly (in exchange for increased memory use).
Golang's bufio.Scanner implementation avoids excessive memcopies to the head of the buffer by using a "moving window" over the buffer. It only uses the tail of the buffer for new read operations. Only when the buffer completely fills does it memcpy the current message to the head of the buffer so as to have more space.
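A sketch of the idea (made-up class, not Asio's dynamic_buffer):

#include <cstddef>
#include <string>

// Buffer where consume() is O(1): it only advances an offset. Data is
// moved to the front only when we run out of space at the tail.
class window_buffer {
public:
   void consume(std::size_t n) { begin_ += n; } // No memmove here.

   // Append bytes read from the socket; compaction happens lazily, only
   // when appending would force the string to reallocate.
   void append(char const* p, std::size_t n)
   {
      if (begin_ > 0 && data_.size() + n > data_.capacity()) {
         data_.erase(0, begin_); // The only memmove, amortized.
         begin_ = 0;
      }
      data_.append(p, n);
   }

   char const* data() const { return data_.data() + begin_; } // Ready data.
   std::size_t size() const { return data_.size() - begin_; }

private:
   std::string data_;
   std::size_t begin_ = 0; // Start of the unconsumed window.
};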
If the current message was kept in the buffer then the
response could just hold string_views to it. What are your thoughts?
As I said above, response adapters have to act on each new chunk of RESP3 data; afterwards that memory will be overwritten with new data when the buffer is rotated and won't be available anymore.

I consider this a good thing. For example, if you are receiving JSON files from Redis, it is better to deserialize them as the content becomes available than to keep them in intermediate storage to be processed later. I can't come up with a use case where keeping the message around would be desirable.
The pattern to parse the textual protocol is simple: get message, process
message, discard message.
Upon accumulating a whole message, you decode its fields.
Does redis's usage pattern feel similar to this? If it doesn't, then how
does it differ? If it differs, I should reevaluate my thoughts for this
discussion.
As for "[rather] than keeping it in intermediate storage", that's more
complex. The deserialized object *is* intermediate storage. The question
is: can I use pointers to the original stream to put less pressure on the
allocator (even if we customize the allocator, the gains only accumulate)?
For instance, suppose the deserialized object is map<string_view, string_view>:

for (;;) {
   dynamic_buffer buf;
   map<string_view, string_view> result;
   auto message_size = read(socket, buf, result);
   process(result);
   buf.consume(message_size);
}

Now the container is cheaper.
On Sat, 9 Apr 2022 at 03:22, Vinícius dos Santos Oliveira wrote:
Can you clarify what you mean by "erased afterwards"? Afterwards when? Before or after the delivery to the user? When? I need to know when before I can comment much further.
Let me give you an example. A map data type with two elements looks like this on the wire:

"%2\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n"

The parser starts by reading the message header with async_read_until (\r\n) and sees it is a map with size 2. This information is passed to the user by means of a callback (adapter in my examples)

callback({type::map, 2, ...}, ...);

after which the read operation consumes the "%2\r\n" and the buffer content is reduced to

"$4\r\nkey1\r\n$6\r\nvalue1\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n"

Reading the next element works likewise, but now the element is a blob type and not a map. The parser reads the header to know the size of the blob (again with read_until) and "$4\r\n" is consumed, reducing the buffer to

"key1\r\n$6\r\nvalue1\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n"

Then it reads the blob "key1" with a read of size 6 (two more to consume \r\n) and passes that info to the user

callback({type::blob_string, 1, 1, "key1"}, ...);

"key1\r\n" is then consumed, resulting in the buffer

"$6\r\nvalue1\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n"

The same procedure is applied to the remaining elements until the map is completely processed. In simple words, as soon as data becomes available it is passed to the user and then consumed:

callback(...);
x.consume(n);
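In code, a stripped-down version of that loop could look like this (synchronous, no error handling, and assuming the whole message is already in the buffer):

#include <cassert>
#include <iostream>
#include <string>

// Parses a RESP3 map of blob strings, invoking the callback and consuming
// the buffer as it goes, exactly in the order described above.
template <class Callback>
void parse_map(std::string& buf, Callback cb)
{
   assert(buf.front() == '%');
   auto pos = buf.find("\r\n");
   auto size = std::stoi(buf.substr(1, pos - 1));
   cb("map", std::to_string(size)); // callback({type::map, 2, ...}, ...);
   buf.erase(0, pos + 2);           // Consume "%2\r\n".

   for (int i = 0; i < 2 * size; ++i) { // Keys and values alternate.
      assert(buf.front() == '$');
      pos = buf.find("\r\n");
      auto len = std::stoul(buf.substr(1, pos - 1));
      buf.erase(0, pos + 2);                 // Consume "$4\r\n".
      cb("blob_string", buf.substr(0, len)); // E.g. "key1".
      buf.erase(0, len + 2);                 // Consume "key1\r\n".
   }
}

int main()
{
   std::string wire =
      "%2\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n";
   parse_map(wire, [](auto const& type, auto const& value) {
      std::cout << type << ": " << value << '\n';
   });
}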
It's not clear to me at all how aedis manages the buffer. If the buffer were an internal implementation detail (as in wrapping the underlying socket) I wouldn't care, but as it is... it's part of the public interface and I must understand how to use it.
Sure, does the explanation above make things clearer?
Golang's bufio.Scanner implementation avoids excessive memcopies to the head of the buffer by using a "moving window" over the buffer. It only uses the tail of the buffer for new read operations. Only when the buffer completely fills does it memcpy the current message to the head of the buffer so as to have more space.
That is something I would like to see in Asio. It would definitely improve performance.
The pattern to parse the textual protocol is simple: get message, process message, discard message.
Upon accumulating a whole message, you decode its fields.
What is the point of accumulating the whole message if I am done with what has already been read? If I am reading a Redis Hash with millions of elements into a std::unordered_map, why should I also keep the raw message in the buffer until it is complete? That would needlessly hold two copies of the data in memory.
Does redis's usage pattern feel similar to this? If it doesn't, then how does it differ? If it differs, I should reevaluate my thoughts for this discussion.
I hope these points were also addressed in the comments above. If not, please ask.
As for "[rather] than keeping it in intermediate storage", that's more complex. The deserialized object *is* intermediate storage. The question is: can I use pointers to the original stream to put less pressure on the allocator (even if we customize the allocator, the gains only accumulate)? For instance, suppose the deserialized object is map
: for (;;) { dynamic_buffer buf; map
result; auto message_size = read(socket, buf, result); process(result); buf.consume(message_size); } Now the container is cheaper.
Ditto. This is indeed something nice which I would like to support. But as I said above, I don't know how to achieve this with the current Asio buffers, nor whether it is really useful.

Regards,
Marcelo
On Sat, 9 Apr 2022 at 05:36, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
[...] does the explanation above [example spanning several paragraphs] make things clearer?
Yes. The pattern is much clearer now. Thank you. I'll need to think more about the problem, but for now I'm tending to sympathize with your approach — consuming as it parses.

Keeping the message in the buffer is not as much of a problem as you think. The memory usage will not be greater (std::string, for instance, will hold not only the string itself, but an extra, possibly unused area reserved for SSO). However, the pattern here might indeed favour fragmented allocations more. It might be more important to stress the allocators than trying to avoid them.

Is there any redis command where anything resembles a need for a "streaming reply"? How does Aedis deal with it?

Is there support to deserialize directly to an object type? For instance:

struct MyType { int x; std::string y; };

Also, you should add a page to the documentation comparing Aedis to other libraries (e.g. cpp-bredis).
Golang's bufio.Scanner implementation avoids excessive memcopies to the head of the buffer by using a "moving window" over the buffer. It only uses the tail of the buffer for new read operations. Only when the buffer completely fills does it memcpy the current message to the head of the buffer so as to have more space.
That is something I would like to see in Asio. It would definitely improve performance.
You don't need to use dynamic buffers tho. Dynamic buffer tries to abstract buffered IO. The important trait present in dynamic buffer is: as the buffer is external to the IO object, it allows one to steal the buffered data to parse (as in finding the new end-of-message markers) and consume some slice using a different parser. Do not confuse this trait with the example you gave earlier about JSON; it's a different trait here.

Honestly, I think the "abstraction" dynamic buffer abstracts too little (it's quite literally a region of buffered data + capacity, which you can implement by declaring one or two members in your own class) to offer a value worth pursuing. What really offers ease of use to the final user is formatted IO (which must be done on top of buffered IO — be it external or internal). Your library abstracts formatted redis IO. You could just as well buffer the data yourself (as in wrapping the underlying socket in a new class that keeps the state machine plus buffer there). For instance, C++'s <iostream> (which usually is *not* a good place to look for inspiration anyhow) merges formatted and buffered IO in a single place (and so do the standard scanners for many other languages), and that's fine.

If you think composition of "parsers" is important to Aedis (as in the trait enabled by dynamic buffer, which I don't think matters to redis), you can just add a method to return the buffered data and another to set the new buffered data. It's a little more complex than that (you need to ensure the buffer will outlive the socket so as not to invoke UB on outstanding async operations, and plenty of other smallish details), but not rocket science either.

Do notice how the *protocol* dictates the usage pattern for the buffered data here. It doesn't always make sense to decouple buffered IO and formatted IO. Keep the buffer yourself and ignore what everyone else is doing. You don't need to use every Boost.Asio class just because the class exists (that'd be absurd). The layers are:

- raw/low-level IO: Boost.Asio abstractions are good and were refined little-by-little over its many years of existence
- buffered IO: far too little to worry about; and Boost.Asio never really pursued the topic beyond an almost-toy abstraction
- formatted IO: complex, and it helps to have abstractions from external libraries

Do notice as well that buffered IO is a dual: input and output. We only talked about buffered input. For buffered output, what you really want is either (1) improving performance by writing data in batches, or (2) avoiding the problems that originate from composed operations and concurrent writers (and several approaches would exist anyway — from fiber-level mutexes to queueing sockets), and Boost.Asio doesn't even offer a solution here (which should be the job of buffered output). However, that doesn't concern Aedis. I'm just trying to persuade you to not worry too much about dynamic buffer.

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
Hi,
On Sat, 9 Apr 2022 at 20:45, Vinícius dos Santos Oliveira wrote:
Keeping the message in the buffer is not as much of a problem as you think. The memory usage will not be greater (std::string, for instance, will hold not only the string itself, but an extra possibly unused area reserved for SSO). However the pattern here might indeed favour fragmented allocations more. It might be more important to stress the allocators than trying to avoid them.
I had a look yesterday at the implementation of asio::dynamic_buffer
and think it would be simple to make x.consume(n) consume less
eagerly. All that would be needed is an accumulate parameter that
would change the behaviour of x.consume(n) to add an offset instead of
std::string::erase'ing the buffer.
std::string buffer;
std::vector<std::string> vec;
auto dbuffer = dynamic_buffer(buffer, ..., accumulate);

resp3::read(socket, dbuffer, adapt(vec)); // buffer still contains the data.

and then, after using the response, users would clear all accumulated data at once

dbuffer.clear();
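A sketch of what I have in mind (illustrative only, not the real Asio interface):

#include <cstddef>
#include <string>

// consume(n) only records how much has been parsed; the actual erase is
// deferred until the user calls clear(), after the response was used.
class accumulating_buffer {
public:
   explicit accumulating_buffer(std::string& s) : data_{s} {}

   void consume(std::size_t n) { consumed_ += n; } // No erase here.
   void clear() { data_.erase(0, consumed_); consumed_ = 0; }

   // prepare()/commit()/data() would forward to asio::dynamic_buffer.

private:
   std::string& data_;
   std::size_t consumed_ = 0;
};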
Is there any redis command where anything resembles a need for a "streaming reply"? How does Aedis deal with it?
RESP3 has two stream data types:

1. Streamed string.
2. Streamed aggregates.

Aedis supports 1. AFAIK, there is no Redis command using any of them.
Is there support to deserialize directly to an object type? For instance:
struct MyType { int x; std::string y; };
I have added an example, please see

https://github.com/mzimbres/aedis/blob/master/examples/low_level/sync_serial....

The example serializes the structure you asked for in binary format to keep things simple. I expect, however, that most users will use JSON.
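In essence, the user supplies two functions that convert the type to a blob string and back. A sketch of the binary variant (the hook names in the real example differ; the blob stored in Redis is opaque, so any format works):

#include <cstring>
#include <string>

struct MyType { int x; std::string y; };

std::string to_blob(MyType const& obj)
{
   std::string out(sizeof obj.x, '\0');
   std::memcpy(&out[0], &obj.x, sizeof obj.x);
   out += obj.y; // The rest of the blob is the string payload.
   return out;
}

MyType from_blob(std::string const& in)
{
   MyType obj;
   std::memcpy(&obj.x, in.data(), sizeof obj.x);
   obj.y = in.substr(sizeof obj.x);
   return obj;
}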
Also, you should add a page on the documentation comparing Aedis to other libraries (e.g. cpp-bredis).
Todo.
Honestly, I think the "abstraction" dynamic buffer abstracts too little (it's quite literally a region of buffered data + capacity, which you can implement by declaring one or two members in your own class) to offer a value worth pursuing. What really offers ease of use to the final user is formatted IO (which must be done on top of buffered IO — be it external or internal). Your library abstracts formatted redis IO. You could just as well buffer the data yourself (as in wrapping the underlying socket in a new class that keeps the state machine plus buffer there). For instance, C++'s <iostream> (which usually is *not* a good place to look for inspiration anyhow) merges formatted and buffered IO in a single place (and so do the standard scanners for many other languages), and that's fine.
This is what the high-level client does: https://mzimbres.github.io/aedis/intro_8cpp_source.html
Do notice how the *protocol* dictates the usage pattern for the buffered data here. It doesn't always make sense to decouple buffered IO and formatted IO. Keep the buffer yourself and ignore what everyone else is doing. You don't need to use every Boost.Asio class just because the class exists (that'd be absurd).
Sure. Aedis was a bottom-up approach that used the building blocks provided by Asio. At a certain point I may need my own concepts and everything; at the moment, however, those building blocks are suiting me well.
The layers are:
- raw/low-level IO: Boost.Asio abstractions are good and were refined little-by-little over its many years of existence
- buffered IO: far too little to worry about; and Boost.Asio never really pursued the topic beyond an almost-toy abstraction
- formatted IO: complex, and it helps to have abstractions from external libraries
Do notice as well that buffered IO is a dual: input and output. We only talked about buffered input. For buffered output, what you really want is either (1) improving performance by writing data in batches,
If I understand you correctly, this is also what Aedis does; it is called pipelining in Redis: https://redis.io/docs/manual/pipelining/
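Concretely, pipelining means several commands go out in a single write and the replies come back in order. Spelling the RESP wire format out by hand for illustration:

#include <boost/asio.hpp>
#include <string>

namespace asio = boost::asio;

// socket: a connected asio::ip::tcp::socket.
void ping_and_llen(asio::ip::tcp::socket& socket)
{
   std::string payload;
   payload += "*1\r\n$4\r\nPING\r\n";                // PING
   payload += "*2\r\n$4\r\nLLEN\r\n$5\r\nmykey\r\n"; // LLEN mykey
   asio::write(socket, asio::buffer(payload));       // One write, two commands.
   // The two replies now arrive back-to-back and are read in order.
}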
or (2) avoiding the problems that originate from composed operations and concurrent writers (and several approaches would exist anyway — from fiber-level mutexes to queueing sockets),
I use a queue to prevent concurrent writes, like everybody else I'm afraid.

Thank you for all the input, btw.

Regards,
Marcelo
On Sun, 10 Apr 2022 at 07:49, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
On Sat, 9 Apr 2022 at 20:45, Vinícius dos Santos Oliveira wrote:

Keeping the message in the buffer is not as much of a problem as you think. The memory usage will not be greater (std::string, for instance, will hold not only the string itself, but an extra, possibly unused area reserved for SSO). However, the pattern here might indeed favour fragmented allocations more. It might be more important to stress the allocators than trying to avoid them.
I had a look yesterday at the implementation of asio::dynamic_buffer and think it would be simple to make x.consume(n) consume less eagerly. All that would be needed is an accumulate parameter that would change the behaviour of x.consume(n) to add an offset instead of std::string::erase'ing the buffer.
std::string buffer;
std::vector<std::string> vec;
auto dbuffer = dynamic_buffer(buffer, ..., accumulate);

resp3::read(socket, dbuffer, adapt(vec)); // buffer still contains the data.

and then, after using the response, users would clear all accumulated data at once

dbuffer.clear();
Well, you don't even need to call consume(n) within the adapters. You might just as well return the number of bytes consumed; by the end of resp3::read() we'd know how many bytes. However, as I've said earlier, I'm now sympathizing with your view that the current approach plays more nicely with the redis usage pattern. I'll trust you as the redis expert on this one.

And just a warning: I'll keep commenting on this sub-topic for as long as there are replies. As soon as you feel we're derailing from the Aedis discussion, you should create a new topic in the mailing list right on the spot.
Is there any redis command where anything resembles a need for a "streaming reply"? How does Aedis deal with it?
RESP3 has two stream data types
1. Streamed string.
2. Streamed aggregates.
Aedis supports 1. AFAIK, there is no Redis command using any of them.
Okay.
Is there support to deserialize directly to an object
type? For instance:
struct MyType { int x; std::string y; };
I have added an example, please see
https://github.com/mzimbres/aedis/blob/master/examples/low_level/sync_serial... .
The example serializes the structure you asked for in binary format to keep things simple. I expect, however, that most users will use JSON.
Okay. I see the following snippet taken from Aedis' documentation:
// Sends a container, with key.
sr.push_range(command::hset, "key", map);
map's value_type is a pair here.
This is what the high-level client does: https://mzimbres.github.io/aedis/intro_8cpp_source.html
This class detracts a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:

[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.

-- https://lists.boost.org/Archives/boost/2014/03/212072.php

Now, why does it matter? Take a look at this post: https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-a...

Nathaniel goes a long way describing a lot of problems with Python network APIs. He just misses pinpointing their origin. Due to missing the origin of the problem, he obsesses over something he has been calling "structured concurrency" in later blog posts, which I suggest you ignore. Anyway, the origin of (and solution to) all problems with Python "callback" APIs is very simple: just follow Boost.Asio style and you'll be covered (and right now you aren't).

That is not to say that callbacks are forbidden. Boost.Asio itself uses callbacks: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/ssl__stream/set_verify_callback/overload1.html. But do notice the subtleties here. When Boost.Asio uses callbacks, it's more like std::find_if() using a function object. Boost.Asio won't turn you into "a passive party in the design, who only has to react to incoming requests". Boost.Asio will keep you as the active party at all times. Meanwhile, the problems described by Nathaniel are more like callbacks acting almost as signal handlers (a global "CPU time" resource that will call a callback in-between any of the scheduler's rounds, and that doesn't really belong to any user's CPS chain sequence[1]).

[1] Continuation-passing style just emulates a user-level thread, so we could borrow analogies from the thread world too if we wanted, such as SIGEV_SIGNAL/SIGEV_THREAD.

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
On Tue, 12 Apr 2022 at 06:28, Vinícius dos Santos Oliveira wrote:
This class detracts a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:
[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.
-- https://lists.boost.org/Archives/boost/2014/03/212072.php
IMO, he is confusing *Boost.Asio design* with *high- vs low-level design*. Let us have a look at this example from Asio itself:

https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...

The public API of the chat_session is

class chat_session {
public:
   void start();
   void deliver(const std::string& msg);
};

One could also erroneously think that the deliver() function above is "not following the Asio style" because it is not an async function and has no completion token. But in fact, it has to be this way for a couple of reasons:

- At the time you call deliver(msg) there may be an ongoing write, in which case the message has to be queued and sent only after the ongoing write completes.

- Users should be able to call deliver from inside other coroutines. That wouldn't work well if it were an async_ function. Think for example of two chat sessions sending messages to one another:

coroutine() // Session1
{
   for (;;) {
      std::string msg;
      co_await net::async_read(socket1, net::dynamic_buffer(msg), ...);

      // Wrong.
      co_await session2->async_deliver(msg);
   }
}

Now if session2 becomes unresponsive, so does session1, which is undesirable. The read operation should never be interrupted by other IO operations. These are some of the reasons why deliver() has a simple implementation:

void deliver(const std::string& msg)
{
   write_msgs_.push_back(msg);
   timer_.cancel_one();
}

i.e. push the message into a queue and signal that to the writer coroutine, so it can write it. This is also what my send() and send_range() functions do.

Now to the start() function: as you see in the link, it spawns two coroutines, one that keeps reading from the socket and one that keeps waiting for new messages to be written. As you can see, it doesn't communicate errors to the user. In Aedis, I reworked this function so it can communicate errors and renamed it to async_run. For that to be possible I use a parallel group:

https://github.com/mzimbres/aedis/blob/27b3bb89fbbec6acd8268f839f310e8d2b5a1...

The explanation above should also make it clear why, one way or another, a high-level API that loops on async_read and async_write will end up having a callback. Users must somehow get their code called after each operation completes.
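For completeness, the writer coroutine that pairs with deliver() looks roughly like this (following the pattern of Chris's example, simplified; the timer is assumed to be set to expire at time_point::max()):

#include <boost/asio.hpp>
#include <deque>
#include <string>

boost::asio::awaitable<void>
writer(boost::asio::ip::tcp::socket& socket,
       boost::asio::steady_timer& timer,
       std::deque<std::string>& write_msgs)
{
   while (socket.is_open()) {
      if (write_msgs.empty()) {
         boost::system::error_code ec;
         // Sleep until deliver() pushes a message and calls cancel_one().
         co_await timer.async_wait(
            boost::asio::redirect_error(boost::asio::use_awaitable, ec));
      } else {
         co_await boost::asio::async_write(
            socket, boost::asio::buffer(write_msgs.front()),
            boost::asio::use_awaitable);
         write_msgs.pop_front();
      }
   }
}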
Now, why does it matter? Take a look at this post: https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-a...
Nathaniel goes a long way describing a lot of problems with Python network APIs. He just misses pinpointing their origin. Due to missing the origin of the problem, he obsesses over something he has been calling "structured concurrency" in later blog posts, which I suggest you ignore.
Anyway, the origin/solution of all problems with Python "callback" APIs is very simple: Just follow Boost.Asio style and you'll be covered (and right now you aren't).
That is not to say that callbacks are forbidden. Boost.Asio itself uses callbacks: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/ssl__str.... But do notice the subtleties here. When Boost.Asio uses callbacks, it's more like std::find_if() using a function object. Boost.Asio won't turn you into "a passive party in the design, who only has to react to incoming requests". Boost.Asio will keep you as the active party at all times.
Meanwhile the problems described by Nathaniel are more like callbacks acting almost as signal-handlers (a global "CPU time" resource that will call a callback in-between any of the scheduler's rounds, and don't really belong to any user's CPS chain sequence[1]).
[1] Continuation-passing style just emulates a user-level thread so we could borrow analogies from the thread world too if we wanted, such as SIGEV_SIGNAL/SIGEV_THREAD.
I believe this has also been addressed above. Marcelo
On Tue, 12 Apr 2022 at 05:11, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
On Tue, 12 Apr 2022 at 06:28, Vinícius dos Santos Oliveira wrote:

This class detracts a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:
[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.
-- https://lists.boost.org/Archives/boost/2014/03/212072.php
IMO, he is confusing *Boost.Asio design* with *High vs low level design*. Let us have a look at this example from Asio itself
https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...
That's not a library. That's an application. The mindset for libraries and applications differs. An application chooses, designs and applies policies. A library adopts the application's policies.

A NodeJS application, for instance, will have a http.createServer() and a callback that gets called at each new request. How, then, do you answer questions such as "how do I defer the acceptance of new connections during high-load scenarios?". Boost.Asio OTOH never suffered from such problems. And then we have Deno (NodeJS's successor), which gave up on the callback model: https://deno.com/blog/v1#promises-all-the-way-down

It has nothing to do with high-level vs low-level. It's more like "policies are built-in and you can't change them".

The public API of the chat_session is
class chat_session {
public:
   void start();
   void deliver(const std::string& msg);
};
One could also erroneously think that the deliver() function above is "not following the Asio style" because it is not an async function and has no completion token. But in fact, it has to be this way for a couple of reasons
That's an application, not a library. It has hidden assumptions (policies) about how the application should behave. And it's not even real-world, it's just an example.

- At the time you call deliver(msg) there may be an ongoing write, in which case the message has to be queued and sent only after the ongoing write completes.
You can do the same with async_*() functions. There are multiple approaches. As an example: https://sourceforge.net/p/axiomq/code/ci/master/tree/include/axiomq/basic_qu...
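For instance, here is a sketch built on Asio's channel (experimental since 1.78), where delivering is itself a proper async operation; all names are illustrative:

#include <boost/asio.hpp>
#include <boost/asio/experimental/channel.hpp>
#include <string>

namespace asio = boost::asio;

using msg_channel =
   asio::experimental::channel<void(boost::system::error_code, std::string)>;

// The writer owns the socket writes; senders queue through the channel.
asio::awaitable<void> writer(asio::ip::tcp::socket& socket, msg_channel& ch)
{
   for (;;) {
      auto msg = co_await ch.async_receive(asio::use_awaitable);
      co_await asio::async_write(socket, asio::buffer(msg), asio::use_awaitable);
   }
}

// async_deliver as a real async operation: it completes once the message
// has been queued (or, with a zero-capacity channel, once the writer takes it).
asio::awaitable<void> async_deliver(msg_channel& ch, std::string msg)
{
   co_await ch.async_send(boost::system::error_code{}, std::move(msg),
                          asio::use_awaitable);
}

A zero-capacity channel gives backpressure for free; a positive capacity behaves like the queue in your example.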
- Users should be able to call deliver from inside other coroutines. That wouldn't work well if it were an async_ function. Think for example of two chat sessions sending messages to one another:
coroutine() // Session1
{
   for (;;) {
      std::string msg;
      co_await net::async_read(socket1, net::dynamic_buffer(msg), ...);

      // Wrong.
      co_await session2->async_deliver(msg);
   }
}
Now if session2 becomes unresponsive, so does session1, which is undesirable. The read operation should never be interrupted by other IO operations.
Actually, it *is* desirable to block. I'll again borrow somebody else's explanation:

Basically, RT signals or any kind of event queue has a major fundamental queuing theory problem: if you have events happening really quickly, the events pile up, and queuing theory tells you that as you start having queueing problems, your latency increases, which in turn tends to mean that later events are even more likely to queue up, and you end up in a nasty meltdown scenario where your queues get longer and longer.

This is why RT signals suck so badly as a generic interface - clearly we cannot keep sending RT signals forever, because we'd run out of memory just keeping the signal queue information around.

-- http://web.archive.org/web/20190811221927/http://lkml.iu.edu/hypermail/linux...

However, if you wish to use this fragile policy in your application, an async_*() function following Boost.Asio style won't stop you. You don't need to pass the same completion token to every async operation. At one call (e.g. the read() call) you might use the Gor-routine token and at another point you might use the detached token: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/detached...

std::string msg;
co_await net::async_read(socket1, net::dynamic_buffer(msg), use_awaitable);
session2->async_deliver(msg, net::detached);

You only need to check whether async_deliver() clones the buffer (if it doesn't, then you can clone it yourself before the call). Right now you're forcing all your users to go through the same policy. That's the mindset of an application, not a library.

The explanation above should also make it clear why, one way or another, a high-level API that loops on async_read and async_write will end up having a callback. Users must somehow get their code called after each operation completes.
It's clear, but the premises are wrong.
Now, why does it matter? Take a look at this post:
https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-a...
[...]
I believe this has also been addressed above.
Not at all. I just had to mention queueing-theory problems, which was one of the problems the aforementioned blog post touches on. There are more.

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
On Tue, 12 Apr 2022 at 19:57, Vinícius dos Santos Oliveira wrote:

On Tue, 12 Apr 2022 at 05:11, Marcelo Zimbres Silva wrote:

On Tue, 12 Apr 2022 at 06:28, Vinícius dos Santos Oliveira wrote:

This class detracts a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:
[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.
-- https://lists.boost.org/Archives/boost/2014/03/212072.php
IMO, he is confusing *Boost.Asio design* with *High vs low level design*. Let us have a look at this example from Asio itself
https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...
That's not a library. That's an application.
That is why I pointed you at the specific line in the file, where the chat_participant class is defined and not to the application as a whole.
A NodeJS application, for instance, will have a http.createServer() and a callback that gets called at each new request. How, then, do you answer questions such as "how do I defer the acceptance of new connections during high-load scenarios?".
That's a general question to a problem I am not trying to solve. Discussing this here will only make things more confusing.
Boost.Asio OTOH never suffered from such problems.
Of course not; not even Boost.Beast, which is built on top of Asio, suffers from this problem, as it provides only a low-level HTTP library.
And then we have Deno (NodeJS's successor), which gave up on the callback model: https://deno.com/blog/v1#promises-all-the-way-down
Can't comment. I know nothing about Deno. The fact that they are giving up on callbacks doesn't mean anything to me.
It has nothing to do with high-level vs low-level. It's more like "policies are built-in and you can't change them".
I disagree.
The public API of the chat_session is
class chat_session {
public:
   void start();
   void deliver(const std::string& msg);
};
One could also erroneously think that the deliver() function above is "not following the Asio style" because it is not an async function and has no completion token. But in fact, it has to be this way for a couple of reasons
That's an application, not a library. It has hidden assumptions (policies) on how the application should behave. And it's not even real-world, it's just an example.
Ditto. I pointed you at a specific line not to the application.
- At the time you call deliver(msg) there may be an ongoing write, in which case the message has to be queued and sent only after the ongoing write completes.
You can do the same with async_*() functions. There are multiple approaches. As an example: https://sourceforge.net/p/axiomq/code/ci/master/tree/include/axiomq/basic_qu...
Although I don't really want to comment on code I am not familiar with, I will, as after a short glance I spotted many problems:

1. It uses two calls to async_write whereas in Chris' example there is only one.

2. It tries to do the same thing as the deliver() function I pointed you at, but poorly. You see, it has to call async_write again in the completion of the previous async_write when the queue is not empty.

3. It uses callbacks, btw., which you were arguing against.

4. It doesn't seem to handle concurrency correctly by means of Asio primitives, e.g. strands. It uses a lock-free queue. I don't think any IO object should do this.

It is doing the same thing that Chris is doing in his example, but poorly.
- Users should be able to call deliver from inside other coroutines. That wouldn't work well if it were an async_ function. Think for example of two chat sessions sending messages to one another:
coroutine() // Session1
{
   for (;;) {
      std::string msg;
      co_await net::async_read(socket1, net::dynamic_buffer(msg), ...);

      // Wrong.
      co_await session2->async_deliver(msg);
   }
}
Now if session2 becomes unresponsive, so does session1, which is undesirable. The read operation should never be interrupted by other IO operations.
Actually, it *is* desirable to block.
I am chatting with my wife and my mother on a chat app. My wife gets on a train and her connection becomes slow and unresponsive. As a result I can't chat with my mother, because my session is blocked trying to deliver a message to my wife. Are you claiming this is a good thing?
I'll again borrow somebody else's explanation:
Basically, RT signals or any kind of event queue has a major fundamental queuing theory problem: if you have events happening really quickly, the events pile up, and queuing theory tells you that as you start having queueing problems, your latency increases, which in turn tends to mean that later events are even more likely to queue up, and you end up in a nasty meltdown scenario where your queues get longer and longer.
This is why RT signals suck so badly as a generic interface - clearly we cannot keep sending RT signals forever, because we'd run out of memory just keeping the signal queue information around.
-- http://web.archive.org/web/20190811221927/http://lkml.iu.edu/hypermail/linux...
I thought this is one reason why Asio executors are a good and necessary thing: you can implement fair treatment of events. Even without executors you can impose a limit on how much the queues are allowed to grow. This quotation is, however, misplaced, as the *desirable to block* premise is already wrong.
However, if you wish to use this fragile policy on your application, an async_*() function following Boost.Asio style won't stop you. You don't need to pass the same completion token to every async operation. At one call (e.g. the read() call) you might use the Gor-routine token and at another point you might use the detached token: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/detached...
That's a misunderstanding of the problem I am trying to solve.
std::string msg;
co_await net::async_read(socket1, net::dynamic_buffer(msg), use_awaitable);
session2->async_deliver(msg, net::detached);
You only need to check whether async_deliver() clones the buffer (if it doesn't then you can clone it yourself before the call).
Right now you're forcing all your users to go through the same policy. That's the mindset of an application, not a library.
No, I don't have any policy; I don't really know what you are talking about. I am only presenting

- async_connect
- async_write
- async_read
- and two timer.async_wait

as a single composed operation to the user, so that they don't have to compose these themselves, each one of them.
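In rough strokes, the composition is the following (sketched with Asio's awaitable operators for brevity; reader() and writer() stand for the loops discussed earlier):

#include <boost/asio.hpp>
#include <boost/asio/experimental/awaitable_operators.hpp>

namespace asio = boost::asio;
using namespace asio::experimental::awaitable_operators;

asio::awaitable<void> reader(asio::ip::tcp::socket&); // loop on async_read
asio::awaitable<void> writer(asio::ip::tcp::socket&); // loop on async_write

asio::awaitable<void> async_run(asio::ip::tcp::socket& socket,
                                asio::ip::tcp::endpoint endpoint)
{
   co_await socket.async_connect(endpoint, asio::use_awaitable);
   // Both loops run forever, so the first failure cancels the other one
   // and is reported as the result of the whole operation.
   co_await (reader(socket) && writer(socket));
}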
Not at all. I just had to mention queueing theory problems which was one of the problems the aforementioned blog post touches on. There are more.
It is fine to bring up new topics. But at a certain point I was not sure whether it was a criticism of a specific part of my design or just a discussion about something you consider important.

Marcelo
On Tue, 12 Apr 2022 at 17:38, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
That's not a library. That's an application.
That is why I pointed you at the specific line in the file, where the chat_participant class is defined and not to the application as a whole.
The abstraction is built for the specific application only. It'll carry its policies built in.
A NodeJS application, for instance, will have a
http.createServer() and a callback that gets called at each new request. How, then, do you answer questions such as "how do I defer the acceptance of new connections during high-load scenarios?".
That's a general question to a problem I am not trying to solve. Discussing this here will only make things more confusing.
Boost.Asio won't solve specific problems either. The rule is: just don't make it worse. If you just follow Boost.Asio's style, you don't need to solve this problem, and nobody will suffer.
Boost.Asio OTOH never suffered from such problems.
Of course, not even Boost.Beast that is built on top of Asio suffers from this problem as it provides only a low-level HTTP library.
Well, I've built an HTTP library higher level than Beast, and Boost.Asio style didn't force me to go lower-level even a bit.

However, let's take examples from somewhere else. This one even has bugs with respect to executors & completion tokens (in the past I've successfully wrapped AZMQ's functions to fix their bugs): https://github.com/zeromq/azmq

Check one of their examples: https://github.com/zeromq/azmq/blob/master/doc/examples/actor/main.cpp

AZMQ won't make you deal with explicit queueing. AZMQ won't even make you worry about connection ordering or disconnections (we can call async_read() earlier than the remote endpoint is configured). AZMQ even deals with subscriptions, all under a simple socket API that is very much Boost.Asio-like (you'll even find set_option() functions just like in Boost.Asio's native sockets).
- At the time you call deliver(msg) there may be an ongoing write,
in which case the message has to be queued and sent only after the ongoing write completes.
You can do the same with async_*() functions. There are multiple approaches. As an example:
https://sourceforge.net/p/axiomq/code/ci/master/tree/include/axiomq/basic_qu...
Although I don't really want to comment on code I am not familiar with, I will, as after a short glance I spotted many problems:
1. It uses two calls to async_write whereas in Chris' example there is only one.
2. It tries to do the same thing as the deliver() function I pointed you at but poorly. You see, it has to call async_write again in the completion of the previous async_write when the queue is not empty.
You're writing to a stream. The function correctly serializes the stream by ordering writes and avoiding a corrupt stream. Each time you call async_write(), post semantics are used, and then you can have multiple concurrent CPS chains (in a fiber world it'd be easy to explain: you'd allow multiple fibers).

3. It uses callbacks, btw., which you were arguing against.
You're missing the point: "consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request." That's not hard to follow. Examples were plenty.

Before Boost.Asio 1.54 we didn't have completion tokens. Everything was callbacks at every layer. However, they're different types of callbacks. Boost.Asio refers to them as completion handlers. They're one-shot callbacks, not signal handlers that keep being called for unrelated events. They only deal with the completion of an event that you explicitly initiated. Completion tokens are just syntax sugar for that. That's why it maps so well to coroutines. They're CPS chains. They poorly emulate threads/fibers[1], but many of the concepts still apply by analogy.

One of the very few places where Boost.Asio uses callbacks that are not part of CPS chains is this: https://www.boost.org/doc/libs/1_72_0/doc/html/boost_asio/reference/ssl__str...

However, set_verify_callback() is more like the composition for an algorithm (as happens in std::find_if()). set_verify_callback() by itself won't schedule new operations. set_verify_callback() is not part of a framework that will make "the user a passive party in the design, who only has to react to incoming requests".

Boost.Asio has a style and you're not following it.

4. It doesn't seem to handle concurrency correctly by means of Asio primitives, e.g. strands. It uses a lock-free queue. I don't think any IO object should do this.

That's one of the approaches — a thread-safe socket (you don't need to wrap it under a strand). I've said earlier that many approaches exist. I'm giving an example at every step of every comment.
std::string msg;
co_await net::async_read(socket1, net::dynamic_buffer(msg), use_awaitable);
session2->async_deliver(msg, net::detached);
You only need to check whether async_deliver() clones the buffer (if it doesn't then you can clone it yourself before the call).
Right now you're forcing all your users to go through the same policy. That's the mindset of an application, not a library.
No, I don't have any policy; I don't really know what you are talking about. I am only presenting

- async_connect
- async_write
- async_read
- and two timer.async_wait

as a single composed operation to the user, so that they don't have to compose these themselves, each one of them.
Okay. My impression was that Aedis does this in a loop, as in:

for (;;) {
   // ...
   read(...); // call one callback and repeat the loop
}

Is Aedis' pattern different (as in a single-shot operation where the callback is called only once)? I think it's worth clarifying its behavior before we continue our discussion, so I haven't addressed all of your comments (maybe there's nothing to address, depending on the answer here). Does the user call async_run() in a loop, or does async_run() have a loop inside already?

[1] https://www.boost.org/doc/libs/1_72_0/doc/html/boost_asio/overview/core/hand...

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
On Wed, 13 Apr 2022 at 01:29, Vinícius dos Santos Oliveira wrote:
https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...
The abstraction is built for the specific application only. It'll carry its policies built in.
Unless you factor out those application-specific *policies* to end up with a generic *session* that is not only useful in the chat server but everywhere. I am claiming you can do that without losing generality and without being incompatible with *the Asio way*.

I find myself repeating myself often, though, so I think we should agree on something first, before the discussion gets even more unintelligible. So let me start again.

My Redis client class, which I call the high-level API, works to a large extent just like the chat_session I pointed you at

https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...

but adapted to Redis and not a chat server. So if you have a problem with that example, we have to clarify it before proceeding. In that example you see a

- reader: a loop on async_read.
- writer: a loop on async_write.

I am also claiming there is no way to avoid these loops because

- you have to keep constantly reading from a tcp socket for as long as it is open, for obvious reasons.
- you need to loop on async_write to process the message queue.

If I don't provide these loops in my library, users will have to write their own, simply because that is how communication on the tcp layer works.

Now as to why you can't avoid callbacks. The reader implementation looks like

awaitable<void> reader()
{
   for (std::string read_msg;;) {
      std::size_t n = co_await boost::asio::async_read_until(..., use_awaitable);
      room_.deliver(read_msg.substr(0, n));
      read_msg.erase(0, n);
   }
}

As you see, the room_.deliver call is application specific and only useful for the chat server. To make it useful elsewhere, you have to call a user-provided callback, so it becomes generic:

// Generic version.
awaitable<void> reader(callback ck)
{
   for (std::string read_msg;;) {
      std::size_t n = co_await boost::asio::async_read_until(..., use_awaitable);
      ck(read_msg.substr(0, n));
      read_msg.erase(0, n);
   }
}

The same thing applies to the writer. In other words, high-level applications that provide a reader and writer like that will invariably have to use a callback.

The biggest concern I had during development was whether co_yield was a better way to deliver the data, instead of a callback. That however requires coroutine support and C++20 from all users. It is also an annoyance, as users would still have to write the loops themselves again, so it is not a better solution.

Marcelo
On Fri, 8 Apr 2022 at 17:10, Marcelo Zimbres Silva via Boost <boost@lists.boost.org> wrote:
Hi,
I am the author of Aedis, a Redis client library built on top of Boost.Asio that provides communication with the Redis Server over its native and most recent protocol RESP3 [1].
It is an implementation from scratch that depends only on other Boost Libraries (Asio, Variant2, Optional, etc.) and targets C++14.
Long overdue and will add a great deal of real world value. In principle I would support the proposal for inclusion on grounds of utility.
I would like to propose it for inclusion in Boost in the near future if there is enough interest, at the moment I am interested in early feedback as I finish writing it (writing docs, improving tests etc.)
- Do you think this library would be valuable to Boost? - Was the documentation helpful to understand what Aedis provides? - Does the design look good? - Any feedback is welcome.
Link to Aedis Documentation: https://mzimbres.github.io/aedis/
Link to the github project: https://github.com/mzimbres/aedis
If you have never heard of Redis, this is the best place to start: https://redis.io
Regards, Marcelo
[1] https://github.com/antirez/RESP3/blob/master/spec.md
--
Richard Hodges
hodges.r@gmail.com
office: +44 2032 898 513
home: +376 861 195
mobile: +376 380 212
On Sat, 9 Apr 2022 at 12:08, Richard Hodges via Boost wrote:

On Fri, 8 Apr 2022 at 17:10, Marcelo Zimbres Silva via Boost wrote:

I am the author of Aedis, a Redis client library built on top of Boost.Asio that provides communication with the Redis Server over its native and most recent protocol RESP3 [1].
It is an implementation from scratch that depends only on other Boost Libraries (Asio, Variant2, Optional, etc.) and targets C++14.
Long overdue and will add a great deal of real world value.
In principle I would support the proposal for inclusion on grounds of utility.
+1 from me too. IMO, format/protocol-specific utilitarian libraries like this one (also the MySQL and URL libraries proposed recently) should find their place in Boost.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
On Fri, 8 Apr 2022, 17:10 Marcelo Zimbres Silva via Boost, <boost@lists.boost.org> wrote:
Hi,
I am the author of Aedis, a Redis client library built on top of Boost.Asio that provides communication with the Redis Server over its native and most recent protocol RESP3 [1].
It is an implementation from scratch that depends only on other Boost Libraries (Asio, Variant2, Optional, etc.) and targets C++14.
I would like to propose it for inclusion in Boost in the near future if there is enough interest, at the moment I am interested in early feedback as I finish writing it (writing docs, improving tests etc.)
- Do you think this library would be valuable to Boost?
Although I am not a Redis user, I believe it would be valuable in Boost. I think many users would appreciate robust interfaces to widely used systems. I have proposed a MySQL client, similar to your library.

- Was the documentation helpful to understand what Aedis provides?
I have had a quick glance at the docs and have some questions: which API do you expect most of your users to use? The higher level or the lower level? If it's the former, I would move that section to the beginning. I would also like to know the advantages of one API vs the other, like when I would use the lower-level vs the higher-level and why.

- Does the design look good?
I would like to understand the client::async_run mechanics - when does that function complete and under what circumstances does it error? It appears that client::send is using some sort of queue before the messages are sent to Redis; is that thread-safe? How can I know if a particular operation completed, and whether there was an error or not? I would also like more info on when the callbacks of the receiver are invoked.

In general, I feel that the higher-level interface is forcing me to use callback-based code, rather than following Asio's universal async model. What is the design rationale behind that? Would it be possible to have something like client::async_connect(endpoint, CompletionToken)?

- Any feedback is welcome.
[...]
On Sat, 9 Apr 2022 at 16:39, Ruben Perez
Although I am not a Redis user, I believe it would be valuable in Boost.
Thanks.
- Was the documentation helpful to understand what Aedis provides?
I have had a quick glance at the docs and have some questions: which API do you expect most of your users to use? The higher level or the lower level?
Difficult to say. The low-level API is very useful for simple tasks, for instance: connect to Redis, perform an operation and close the connection. For example, if a Redis server dies and the client wants to perform a failover, it has to connect to one sentinel, ask for the master address and close the connection. If that sentinel is also dead, it has to ask a second one, and so on. This is very simple to implement with the low-level API, especially if you are using coroutines.

A scenario where users probably won't want to constantly open and close connections is an HTTP server with thousands of concurrent sessions, all of which perform operations that require communication with Redis (e.g. a chat server). You probably won't want one Redis connection for each HTTP session, and much less one that is opened and closed on every operation. In other words, you need a small number of long-lasting Redis sessions. When you do that, you also start having to manage the message queue, as HTTP sessions may send messages to Redis while the client is still waiting for a pending response. Add server pushes and pipelines to that and you clearly need the high-level API, which manages all of this for you.
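For concreteness, a rough coroutine sketch of that sentinel query with the low-level API, in the style of the documentation's front-page example. The request-writing step is elided, and adapting the reply of SENTINEL get-master-addr-by-name into a std::vector<std::string> is my assumption about the adapter, not something lifted from the docs:

   boost::asio::awaitable<void>
   query_sentinel(boost::asio::ip::tcp::socket& socket)
   {
      // ... write the SENTINEL get-master-addr-by-name request ...

      std::string read_buffer;
      std::vector<std::string> addr; // expected {host, port}
      co_await resp3::async_read(
         socket,
         boost::asio::dynamic_buffer(read_buffer),
         adapt(addr));

      // Connect to addr[0]:addr[1] next, or try another sentinel
      // if this one is also dead.
   }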
I would also like to know the advantages of one API over the other, like when would I use the lower-level vs the higher-level and why?
Hope the comments above clarify that.
- Does the design look good?
I would like to understand the mechanics of client::async_run - when does that function complete and under what circumstances does it error?
async_run will

- Connect to the endpoint (async_connect).
- Loop around resp3::async_read to keep reading responses and server pushes.
- Start an operation that writes messages when they become available (async_write + timer).

It will return only when an error occurs:

- Any error that can occur on the Asio layer.
- RESP3 errors: https://mzimbres.github.io/aedis/group__any.html#ga3e898ab2126407e62f33851b3...
- Adapter errors, for example receiving a Redis set into a std::map. See https://mzimbres.github.io/aedis/group__any.html#ga0339088c80d8133b76ac4de63...

More info here: https://mzimbres.github.io/aedis/classaedis_1_1generic_1_1client.html#ab0961...
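In caller terms that suggests a usage pattern roughly like the one below. The completion signature is inferred from the description above; treat it as an assumption, not the documented one:

   // db is the high-level client instance; construction omitted.
   db.async_run([](boost::system::error_code ec) {
      // Reached only once, when the connection is done for good:
      // an Asio-layer error, a RESP3 error or an adapter error.
      std::clog << "async_run finished: " << ec.message() << '\n';
   });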
It appears that client::send is using some sort of queue before the messages are sent to Redis, is that thread-safe?
Yes, queuing is necessary: the client can only send a message when there is no pending response. No, it isn't thread-safe; users should use Asio facilities, e.g. strands, for that.
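A minimal sketch of the strand pattern in plain Asio. Only the serialization through the strand is the point here; the send() overload shown is schematic:

   // ioc is a boost::asio::io_context; db is the client instance.
   // Every access to db, including the one that starts async_run,
   // must go through this same strand.
   auto strand = boost::asio::make_strand(ioc);
   boost::asio::dispatch(strand, [&db] {
      db.send(command::ping); // schematic overload, see the docs
   });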
How can I know if a particular operation completed, and whether there was an error or not?
async_run will only complete when an error occurs. All other events are communicated by means of a receiver callback:

- receiver::on_read: called when async_read completes.
- receiver::on_write: called when async_write completes.
- receiver::on_push: called when async_read completes for a server push.
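So a receiver is a class offering those members, along these lines. The member names come from the list above; the parameter lists are assumptions for illustration:

   struct my_receiver {
      void on_read(command cmd)
      {
         // A response to cmd was read (and adapted, if requested).
      }

      void on_write(std::size_t n)
      {
         // A request of n bytes was written to the socket.
      }

      void on_push(std::size_t n)
      {
         // A server push (e.g. a pub/sub message) was read.
      }
   };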
I would also like more info on when the callbacks of the receiver are invoked.
In general, I feel that the higher-level interface is forcing me to use callback-based code, rather than following Asio's universal asynchronous model. What is the design rationale behind that?
It does follow the Asio async model. Just as async_read is implemented in terms of one or more calls to async_read_some, nothing prevents you from implementing async_run in terms of calls to async_read, async_write and async_connect; this is called a composed operation. Notice this is not something specific to my library. Regardless of whether it is HTTP, Websocket, RESP3 etc., users will invariably

- Connect to the server.
- Loop around async_read.
- Call async_write as messages become available.

That means there is no way around callbacks when dealing with long-lasting connections. To prevent my users from reinventing this every time, I would like to provide this facility. A question that arises is whether async_run should receive individual callbacks or a class that offers the callbacks as member functions. I decided on the latter.
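Written as a coroutine, the skeleton of such a composed operation might look roughly like this. The write side, error handling and response adaptation are omitted; this is a sketch of the shape, not the library's actual implementation:

   boost::asio::awaitable<void>
   run(boost::asio::ip::tcp::socket& socket,
       boost::asio::ip::tcp::endpoint ep)
   {
      co_await socket.async_connect(ep, boost::asio::use_awaitable);

      std::string buffer;
      for (;;) {
         // Dispatch each message to the receiver callbacks here;
         // the write loop (async_write + timer) runs concurrently.
         co_await resp3::async_read(
            socket, boost::asio::dynamic_buffer(buffer));
      }
   }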
Would it be possible to have something like client::async_connect(endpoint, CompletionToken)?
As said above, async_connect is encapsulated in the async_run call. Regards, Marcelo
Hello Marcelo, I'm a user of Aedis in a high-performance cloud service for IoT. I think it is very useful. I'll try to write up some feedback soon. Kind regards,
-- Felipe Magno de Almeida Owner @ Expertise Solutions www: https://expertise.dev phone: +55 48 9 9681.0157 LinkedIn: in/felipealmeida
On Fri, Apr 8, 2022, 12:10 Marcelo Zimbres Silva via Boost < boost@lists.boost.org> wrote:
[...]
participants (6)
- Felipe Magno de Almeida
- Marcelo Zimbres Silva
- Mateusz Loskot
- Richard Hodges
- Ruben Perez
- Vinícius dos Santos Oliveira