No, the IO (read) is done by each individual async_exec call, so it can parse RESP3 directly into its final storage without copies (i.e. the data structure passed by the user to the adapt function).
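To make the "no copies" point concrete, here is a purely illustrative sketch (not the library's actual parser or adapter API): a RESP3 bulk string such as `$5\r\nhello\r\n` is decoded straight into a `std::string` the caller supplied, so the payload is written once into its final storage rather than through an intermediate buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <string_view>

// Hypothetical illustration of zero-copy adaptation: parse a RESP3 bulk
// string directly into the user-provided std::string `out`.
bool parse_bulk_string(std::string_view in, std::string& out) {
    if (in.empty() || in.front() != '$') return false;
    std::size_t pos = in.find("\r\n");
    if (pos == std::string_view::npos) return false;
    std::size_t len = std::stoul(std::string(in.substr(1, pos - 1)));
    if (in.size() < pos + 2 + len + 2) return false;   // payload + trailing CRLF
    out.assign(in.substr(pos + 2, len));               // single write into user storage
    return true;
}
```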
Out of curiosity, does having the parsing done in async_exec (vs. in async_run) have to do with making no copies?
2. If there are write errors in a request, the function that gets the error is async_run, and not the individual requests queued by async_exec. Right?
Correct.
What will happen with the request posted with async_exec? Will it error with some special code, or will it just not complete until a reconnection happens?
A push has a fixed RESP3 type (the RESP3 message starts with a > character). The heuristic you are referring to handles corner cases. Say I send a command that expects no response, but I make an error in its syntax: the server will communicate an error as a response. In this case I have no alternative but to pass it to async_receive, as it shouldn't cause the connection to be closed; the connection can still be used afterwards (questionable, but possible).
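The dispatch described above can be sketched as follows (type names here are illustrative, not the library's): RESP3 frames are typed by their first byte, a push always starts with '>', so a reader can route pushes to async_receive and everything else to the pending async_exec.

```cpp
#include <cassert>
#include <string_view>

// Illustrative classifier: RESP3 type is determined by the first byte.
enum class resp3_kind { push, simple_string, error, number, blob, map, other };

resp3_kind classify(std::string_view msg) {
    if (msg.empty()) return resp3_kind::other;
    switch (msg.front()) {
        case '>': return resp3_kind::push;           // out-of-band push frame
        case '+': return resp3_kind::simple_string;
        case '-': return resp3_kind::error;          // e.g. reply to a malformed command
        case ':': return resp3_kind::number;
        case '$': return resp3_kind::blob;
        case '%': return resp3_kind::map;
        default:  return resp3_kind::other;
    }
}
```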
If I'm getting this correctly, if you make a mistake and async_exec contains an invalid request, that error response is treated as a push. Some additional questions: a. Will async_exec complete successfully, or with an error? b. Can you provide an example of the kind of mistake that would cause this to happen? c. What will happen if async_receive has not been called because no server pushes are expected? Will it deadlock the connection?
4. From what I've gathered, the protocol can be made full duplex (you may write further requests while reading responses to previously written requests),
That is not how I understand it: I send a request and wait for the response. That is why pipelining is so important: https://redis.io/docs/manual/pipelining/. Otherwise, why would it need pipelining?
What I mean: request batch A is written. While you are waiting for the response, request batch B is enqueued using async_exec. I've understood that batch B will wait until A's response has been fully read before it's written. Is there a reason to make B wait, instead of writing it as soon as A's write is complete?
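The scheduling policy in question can be modeled with a toy write queue (this is a sketch for discussion, not the library's implementation): requests enqueued while a batch is in flight are coalesced into the next pipelined write, and that next write only starts once the previous batch's responses have been consumed.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Toy model of the "wait for the previous response" write policy.
struct write_queue {
    std::deque<std::string> pending;     // enqueued by async_exec
    std::vector<std::string> in_flight;  // written, awaiting responses

    void enqueue(std::string req) { pending.push_back(std::move(req)); }

    void write_next() {
        // Policy under discussion: never write while a batch is in flight.
        if (!in_flight.empty() || pending.empty()) return;
        in_flight.assign(pending.begin(), pending.end()); // one pipelined write
        pending.clear();
    }

    void on_responses_read() {           // previous batch fully read
        in_flight.clear();
        write_next();                    // now batch B may go out
    }
};
```

The alternative being asked about would call write_next as soon as the previous *write* completes, instead of waiting for on_responses_read.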
The low-level API is there as a reference. I can't name any reason for using it instead of the connection; there are small exceptions, which I will fix with time.
BTW, is the low-level API intended to be used by final users? As in, will it be stable and maintained?
Putting it in simple terms: I have observed situations where a push arrives in the middle of a response, which means the async_exec must be suspended and IO control passed to async_receive and then back to the suspended async_exec. This ping-pong between async operations is complex to implement without async_run. I think there is a way, though; we can talk more about that later.
Out of curiosity again, can you not deserialize the push from async_exec, then notify (e.g. via a condition variable or similar) to async_receive?
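The hand-off suggested above can be sketched with plain threads and a condition variable (purely illustrative, not the library's async machinery): the side that encounters a push mid-response parks it in a queue and notifies a consumer playing the role of async_receive.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Minimal push hand-off channel: deliver() is the reader that found a
// push; receive() is the async_receive analogue waiting for it.
struct push_channel {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> pushes;
    bool done = false;

    void deliver(std::string push) {
        { std::lock_guard<std::mutex> lk(m); pushes.push(std::move(push)); }
        cv.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    }
    bool receive(std::string& out) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !pushes.empty() || done; });
        if (pushes.empty()) return false;  // closed with nothing pending
        out = std::move(pushes.front());
        pushes.pop();
        return true;
    }
};
```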
10. From what I've seen, all tests are integration tests, in that they test with an actual Redis instance.
The low_level.cpp and request.cpp tests are not integration tests. The others use a locally installed Redis instance, with the exception of the TLS test, which uses db.occase.de:6380.
How do you handle parallel test executions? As in, a CI running multiple jobs in parallel. As an alternative, you may want to use a Docker container for your CIs (which I've found to be much more practical). Let me know if you go down this path, as I've got experience. Regards, Ruben.
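One way to set this up (a sketch, assuming GitHub-Actions-style job IDs; the image tag and variable names are just examples): give each CI job its own disposable Redis on a random free host port, so parallel jobs don't collide.

```shell
# Start one throwaway Redis per CI job; an empty host port lets Docker
# pick a free one, so parallel jobs on the same runner don't clash.
docker run -d --name "redis-$CI_JOB_ID" -p 127.0.0.1::6379 redis:7

# Ask Docker which host port was assigned and hand it to the tests.
REDIS_PORT=$(docker port "redis-$CI_JOB_ID" 6379/tcp | cut -d: -f2)
```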