On Fri, 20 Jan 2023 at 16:47, Ruben Perez
No. The IO (read) is done by each individual async_exec call, so it can parse RESP3 directly into its final storage without copies (i.e. the data structure passed by the user to the adapt function).
Out of curiosity, does having the parsing done in async_exec (vs. in async_run) have to do with making no copies?
Yes, that is the main reason. It reduces latency.
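Roughly, a call looks like this (a sketch from memory rather than from the docs; conn is assumed to be an already-connected aedis::connection, and the exact signatures should be checked against the documentation):

   std::string resp; // final storage, owned by the user
   aedis::resp3::request req;
   req.push("PING", "Hello");
   conn->async_exec(req, aedis::adapt(resp),
      [](boost::system::error_code ec, std::size_t) {
         // On success resp holds the reply, parsed directly into it,
         // without going through an intermediate buffer.
      });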
2. If there are write errors in a request, the function that gets the error is async_run, and not the individual requests queued by async_exec. Right?
Correct.
What will happen with the request posted with async_exec? Will it error with some special code, or will it just not complete until a reconnection happens?
The behaviour is configurable. The user can decide whether the request should remain suspended until a reconnection occurs. The default behaviour is to complete async_exec with an error. For read-only commands, for example (or commands like SET), you don't care about them being executed twice; for others it might be wrong to risk double execution. See https://mzimbres.github.io/aedis/classaedis_1_1resp3_1_1request.html#structa... for more information.
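Something along these lines (sketching from memory; the config field name below is a guess on my part, please take the real names from the request documentation linked above):

   aedis::resp3::request::config cfg;
   cfg.cancel_on_connection_lost = false; // guessed name: keep the request suspended and resend after reconnection
   aedis::resp3::request req{cfg};
   req.push("GET", "key"); // idempotent, so safe to risk executing twice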
A push has a fixed RESP3 type (the RESP3 message starts with a > character). The heuristic you are referring to handles corner cases. Say I send a command that expects no response, but I make an error in its syntax: the server will communicate an error as a response. In this case I have no alternative but to pass it to async_receive, since it shouldn't cause the connection to be closed; the connection can still be used afterwards (a questionable thing, but possible).
If I'm getting this correctly, if you make a mistake and async_exec contains an invalid request, that error response is treated as a push.
No. Spelling or syntax errors in commands will be received as responses and async_exec will complete with an error. There is only one corner case where the error is received as a push: the SUBSCRIBE command, because it does not have a response. When Redis sends a response to it I am not expecting any, so I must treat it like a push, although it does not have the RESP3 push type.
Some additional questions: a. Will async_exec complete successfully, or with an error?
With an error.
b. Can you provide an example of the kind of mistake that would cause this to happen?
   request req;
   req.push("SUBSCRIBE"); // oops, forgot to set a channel.

This will cause the server to send a response to a command that otherwise has no response.
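For contrast, the well-formed call would be something like this (channel name made up), which has no direct response; its messages arrive later as server pushes via async_receive:

   request req;
   req.push("SUBSCRIBE", "mychannel"); // correct: a channel is given, no response is expected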
c. What will happen if async_receive has not been called because no server pushes are expected? Will it deadlock the connection?
The connection can't consume anything on behalf of the user, so it will wait forever.
4. From what I've gathered, the protocol can be made full duplex (you may write further requests while reading responses to previously written requests).
That is not how I understand it; I send a request and wait for the response. That is why pipelining is so important: https://redis.io/docs/manual/pipelining/. Otherwise why would it need pipelining?
What I mean: request batch A is written. While you are waiting for the response, request batch B is enqueued using async_exec. I've understood that batch B will wait until A's response has been fully read before it's written.
Yes, that's how the implementation works.
Is there a reason to make B wait, instead of writing it as soon as A's write is complete?
My understanding is that I have to wait. RESP3 is not like e.g. the Kafka protocol, where you can continuously write. I will check this further though.
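From the caller's side the queuing we are discussing looks roughly like this (a sketch; the request and handler names are placeholders):

   conn->async_exec(req_a, aedis::adapt(resp_a), on_a); // written as soon as the connection is idle
   conn->async_exec(req_b, aedis::adapt(resp_b), on_b); // stays queued until A's response has been fully read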
The low-level API is there as a reference. I can't name any reason for using it instead of the connection; there are small exceptions, which I will fix with time.
BTW, is the low-level API intended to be used by final users? As in, will it be stable and maintained?
It has been stable for over a year now, but it is not intended for end users unless perhaps in corner cases. By using it instead of the connection the user loses automatic pipelining and event demultiplexing, which is hard to implement.
Putting it in simple terms: I have observed situations where a push arrives in the middle of a response, which means async_exec must be suspended, IO control passed to async_receive, and then back to the suspended async_exec. This ping-pong between async operations is complex to implement without async_run. I think there is a way though; we can talk more about that later.
Out of curiosity again, can you not deserialize the push from async_exec and then notify async_receive (e.g. via a condition variable or similar)?
No. At the time a push arrives there might be no outstanding call to async_exec, e.g. in a subscribe-only app. See the subscriber example.
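Roughly, such a reader looks like this (sketched from memory after the subscriber example; namespace net = boost::asio is assumed, and the exact response type should be taken from the docs):

   net::awaitable<void> reader(std::shared_ptr<aedis::connection> conn)
   {
      std::vector<aedis::resp3::node<std::string>> resp;
      for (;;) {
         co_await conn->async_receive(aedis::adapt(resp), net::use_awaitable);
         // ... handle the server push stored in resp ...
         resp.clear();
      }
   }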
10. From what I've seen, all tests are integration tests, in that they test with an actual Redis instance.
The low_level.cpp and request.cpp tests are not integration tests. The others use a locally installed Redis instance, with the exception of the TLS test, which uses db.occase.de:6380.
How do you handle parallel test executions? As in a CI running multiple jobs in parallel.
I don't run parallel tests in the CI. Improving this would be nice though.
As an alternative, you may want to use a Docker container for your CIs (which I've found to be much more practical). Let me know if you go down this path, as I've got experience.
Ok, thanks. I have used Boost.MySql as a reference for GitHub Actions and some other things.

Regards,
Marcelo