On Saturday 29 August 2015 15:29:32 Niall Douglas wrote:
On 29 Aug 2015 at 15:36, Andrey Semashev wrote:
What kind of memory corruption do you mean and how do you detect it?
If an impossible situation has occurred, i.e. one bit of data in location A says one thing and another bit of data in location B says the opposite, it is assumed to be memory corruption. It should never, ever occur in any situation other than memory corruption.
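A minimal sketch of the kind of cross-check being described (the names here are hypothetical illustrations, not AFIO's actual API): the same fact is kept redundantly in two locations, and a disagreement between the copies is treated as evidence of corruption rather than a recoverable error.

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical record keeping the same fact in two places. Under correct
// operation the two copies always agree; a disagreement is an "impossible"
// state that only memory corruption can produce.
struct handle_state {
    bool is_open;        // location A
    bool is_open_shadow; // location B: must always mirror is_open
};

// Returns true when the redundant copies agree.
bool state_consistent(const handle_state& s) {
    return s.is_open == s.is_open_shadow;
}

void check_or_die(const handle_state& s) {
    if (!state_consistent(s)) {
        // Known program state has been lost; refuse to continue.
        std::fprintf(stderr,
                     "FATAL: impossible state detected, "
                     "assuming memory corruption\n");
        std::abort();
    }
}
```

The check is a single comparison, so it is cheap enough to leave on in release builds.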
Also, how does terminating the process help fix those errors?
If memory corruption has occurred, then known program state has been lost and the loss of other people's data is a high likelihood. Terminating the process is the only sane thing to do now.
Fatal exit happens when AFIO cannot continue without losing or corrupting other people's data. I make no apology for that - it is the right thing to do.
Why is this even possible with the library?
I always code in detection of impossible states and fatal exit in any of my code, and I have done so for years. It has proven its worth time and time again. I obviously don't go overboard with O(N^2) proof-of-good-state checks in release builds, but if detection of impossible states has a low overhead, it's worth doing as a matter of routine.
Think of it being a manually written undefined behaviour sanitiser. It's no different.
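One way to read "manually written undefined behaviour sanitiser" is a cheap invariant macro that, unlike `assert()`, is not compiled out under `NDEBUG` and terminates on violation. This is an illustrative pattern under that assumption, not AFIO's actual code:

```cpp
#include <cstdio>
#include <exception>

// Unlike assert(), this check survives NDEBUG: it is cheap enough
// (one comparison) to leave enabled in release builds.
#define CHECK_INVARIANT(expr)                                            \
    do {                                                                 \
        if (!(expr)) {                                                   \
            std::fprintf(stderr,                                         \
                         "FATAL: invariant '%s' violated at %s:%d\n",    \
                         #expr, __FILE__, __LINE__);                     \
            std::terminate();                                            \
        }                                                                \
    } while (0)

int divide(int num, int denom) {
    CHECK_INVARIANT(denom != 0); // dividing by zero would be UB otherwise
    return num / denom;
}
```

On the happy path the cost is one branch per call; on violation the process dies with a diagnostic instead of continuing in an undefined state.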
If you're trying to protect yourself from a memory failure this way, then I think your approach is futile. You can't guarantee correctness with your checks, however many you insert in the code. If you want better guarantees, buy better hardware. If you're trying to protect from a program bug that results in memory corruption (e.g. through a dangling pointer), then terminating with std::terminate is useless because it doesn't help to locate and eliminate the bug. It doesn't help the customer either, since their production service crashes all the time. You may save the data from corruption, but again, there are no guarantees, as the corrupted data may have slipped through. I guess this is better than nothing, but hardly something I could count on.
And why not report an error in a usual way?
To be honest, it's never come up. In use cases without memory corruption you never see this code in action. On the other hand, when someone logs a bug which mentions all this stuff printed to std::cerr, it's an excellent sign they have memory corruption problems.
So the checks have never fired so far, yet you say they have proven their worth many times. In what way?
I think you're trying to be too smart in your library. Don't. You will never cover everything users want. There will always be someone like me who wants a different behavior than you envisioned.
I am not being too smart in my library. I am choosing sensible defaults which are useful in most use cases to most people. All this is really quality of implementation detail anyway.
Trying to protect against memory failures is one example of being too smart. That isn't the library's job. Console output also doesn't look like something universally accepted.
Issues have been logged for controlling the printing to std::cerr and enabling or disabling the deadlock debug printing code. If at a later stage people find they need additional behaviours in their real world use cases I have no problem adding solutions for them as well.
In general, I'd discourage adding configuration macros, as they increase the possibility of a config mismatch between TUs. Please consider macro-free options first.
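One macro-free alternative (a sketch of the general technique, not AFIO's actual interface; all names here are hypothetical) is to route diagnostics through a runtime-settable handler stored in a single inline variable, so the behaviour is chosen once at run time instead of per-TU at preprocessing time:

```cpp
#include <atomic>
#include <cstdio>

// Signature of a user-installable diagnostic sink.
using diagnostic_handler = void (*)(const char* msg);

// Default behaviour: print to stderr.
inline void default_handler(const char* msg) {
    std::fprintf(stderr, "%s\n", msg);
}

// One definition shared by every translation unit, so no two TUs can
// disagree about the configuration (unlike a #define, which each TU
// evaluates independently).
inline std::atomic<diagnostic_handler>& handler_slot() {
    static std::atomic<diagnostic_handler> h{default_handler};
    return h;
}

// Users replace the sink at run time; the previous handler is returned
// so it can be restored later.
inline diagnostic_handler set_diagnostic_handler(diagnostic_handler h) {
    return handler_slot().exchange(h);
}

inline void emit_diagnostic(const char* msg) {
    handler_slot().load()(msg);
}
```

A caller that wants to silence or redirect the output installs a different handler, e.g. `set_diagnostic_handler(+[](const char*) {});`, with no macros and no risk of ODR-style configuration drift between translation units.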