On Sun, 10 Apr 2022 at 07:49, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
On Sat, 9 Apr 2022 at 20:45, Vinícius dos Santos Oliveira wrote:

Keeping the message in the buffer is not as much of a problem as you think. The memory usage will not be greater (std::string, for instance, holds not only the string itself but also an extra, possibly unused, area reserved for SSO). However, the pattern here might indeed favour fragmented allocations more. It might be more important to stress the allocators than to try to avoid them.
I had a look yesterday at the implementation of asio::dynamic_buffer and I think it would be simple to make x.consume(n) consume less eagerly. All that would be needed is an accumulate parameter that changes the behaviour of x.consume(n) to add an offset instead of std::string::erase'ing the buffer:
    std::string buffer;
    std::vector<...> vec;

    auto dbuffer = dynamic_buffer(buffer, ..., accumulate);
    resp3::read(socket, dbuffer, adapt(vec));
    // buffer still contains the data.

And then, after using the response, users would clear all accumulated data at once:

    dbuffer.clear();
Well, you don't even need to call consume(n) within the adapters. You might just as well return the number of bytes consumed; by the end of resp3::read() we'd know how many bytes there were. However, as I've said earlier, I'm now sympathizing with your view that your current approach plays more nicely with the Redis usage pattern. I'll trust you as the Redis expert on this one.

And just a warning: I'll keep commenting on this sub-topic for as long as there are replies. As soon as you feel we're derailing from the Aedis discussion, you should create a new topic in the mailing list right on the spot.
Is there any Redis command where anything resembles a need for a "streaming reply"? How does Aedis deal with it?
RESP3 has two stream data types:

1. Streamed strings.
2. Streamed aggregates.
Aedis supports the first. AFAIK, there is no Redis command using either of them.
Okay.
Is there support to deserialize directly to an object
type? For instance:
    struct MyType { int x; std::string y; };
I have added an example; please see
https://github.com/mzimbres/aedis/blob/master/examples/low_level/sync_serial... .
The example serializes the structure you asked for in binary format, to keep the example simple. I expect, however, that most users will use JSON.
Okay. I see the following snippet taken from Aedis' documentation:
    // Sends a container, with key.
    sr.push_range(command::hset, "key", map);
map's value_type is std::pair<...>.
Here is what the high-level client does: https://mzimbres.github.io/aedis/intro_8cpp_source.html
This class deviates a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:

[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.
-- https://lists.boost.org/Archives/boost/2014/03/212072.php

Now, why does it matter? Take a look at this post: https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-a...

Nathaniel goes a long way describing a lot of problems with Python network APIs. He just fails to pinpoint their origin. Because he misses the origin of the problem, he obsesses over something he has been calling "structured concurrency" in later blog posts, which I suggest you ignore. Anyway, the origin of (and the solution to) all the problems with Python "callback" APIs is very simple: just follow Boost.Asio style and you'll be covered (and right now you aren't).

That is not to say that callbacks are forbidden. Boost.Asio itself uses callbacks: <https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/ssl__stream/set_verify_callback/overload1.html>. But do notice the subtleties here. When Boost.Asio uses callbacks, it's more like std::find_if() using a function object. Boost.Asio won't turn you into "a passive party in the design, who only has to react to incoming requests". Boost.Asio will keep you as the active party at all times. Meanwhile, the problems described by Nathaniel are more like callbacks acting almost as signal handlers (a global "CPU time" resource that calls a callback in between any of the scheduler's rounds, and that doesn't really belong to any user's CPS chain sequence[1]).

[1] Continuation-passing style just emulates a user-level thread, so we could borrow analogies from the thread world too if we wanted, such as SIGEV_SIGNAL/SIGEV_THREAD.

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/