On 06/29/2017 09:01 AM, jupiter wrote:
Thanks for the response.
The server program calls boost::asio::async_read to wait for the client's data, and I am wondering why async_read didn't fire an error when the remote socket connection was gone. I thought boost::asio::async_read would detect a lost connection. In fact it does in most situations where the client program exits, but it cannot detect when the client device suddenly loses power. I think that might be because TCP only exchanges segments such as SYN during connection setup and FIN/ACK during teardown; once the connection is established and idle, nothing is on the wire. Correct me if I am wrong here.
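For reference, here is roughly what the server's read loop looks like (session, socket_, buf_, close() and handle_message() stand in for my actual code):

    void session::do_read()
    {
        // async_read completes only once buf_ is full or an error occurs.
        boost::asio::async_read(socket_, boost::asio::buffer(buf_),
            [this](const boost::system::error_code& ec, std::size_t /*n*/)
            {
                if (ec == boost::asio::error::eof ||
                    ec == boost::asio::error::connection_reset)
                {
                    // Fires when the client closes or crashes and its OS
                    // sends FIN/RST, but never on silent power loss,
                    // because no segment ever arrives.
                    close();
                    return;
                }
                if (ec) { close(); return; }
                handle_message();
                do_read();
            });
    }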
You are right. If I have to use keep-alive, I'll implement it at the application level, which gives better control over the timing and uses less bandwidth than the system keep-alive.
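Something along these lines is what I have in mind: a periodic ping on a timer, so there is always some traffic for TCP to retransmit and time out on. This is only a sketch; heartbeat_timer_ (a boost::asio::steady_timer), heartbeat_interval, ping_msg_ and the surrounding session class are my own placeholders, not asio names:

    void session::start_heartbeat()
    {
        heartbeat_timer_.expires_from_now(heartbeat_interval);
        heartbeat_timer_.async_wait(
            [this](const boost::system::error_code& ec)
            {
                if (ec) return;  // timer was cancelled, e.g. on shutdown
                boost::asio::async_write(socket_,
                    boost::asio::buffer(ping_msg_),
                    [this](const boost::system::error_code& ec, std::size_t)
                    {
                        // The write itself may still succeed right after a
                        // power loss (it only fills the local send buffer),
                        // but the unacknowledged ping makes TCP retransmit
                        // and eventually fail a later operation with an
                        // error, which is the detection I am after.
                        if (ec) { close(); return; }
                        start_heartbeat();  // schedule the next ping
                    });
            });
    }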
Thank you.
Kind regards,
- j
The TCP connection is usually managed by your operating system (its TCP stack, to be precise). When your application ends or crashes, the operating system cleans up its TCP connections for you (i.e. disconnects them), which is why the other end sees an "immediate" disconnect in that case. Tearing down a connection involves specific signalling on the TCP layer (FIN, RST).

Now imagine your situation: two machines with a TCP connection between them. While the connection is idle (i.e. nothing is being sent), there is no signalling at all on the TCP layer if TCP keep-alive is disabled and there is no application-layer heartbeat mechanism. You can probably see the problem: in this situation you will never detect that the other machine is gone if it disappears suddenly (sudden power loss, or some kind of network problem), because disconnecting a TCP session, or detecting presence, always requires some kind of signalling.

So, as you can see, this is not a problem specific to asio but one of TCP sockets in general.

Regards,
Andreas
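P.S. For completeness: if you do end up wanting the system-level keep-alive after all, asio exposes the SO_KEEPALIVE flag as a portable socket option. A minimal sketch (error handling omitted; note that the probe timing is a kernel default, e.g. two hours of idle time before the first probe on Linux, and tuning it per socket requires platform-specific options that asio does not wrap portably):

    #include <boost/asio.hpp>

    void enable_keep_alive(boost::asio::ip::tcp::socket& socket)
    {
        // Turns on SO_KEEPALIVE for this socket; the kernel then sends
        // periodic probes on an idle connection and fails pending
        // operations once the peer stops answering.
        boost::asio::socket_base::keep_alive option(true);
        socket.set_option(option);
    }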