Andrey Semashev wrote:
On 3/4/21 10:04 PM, Peter Dimov via Boost wrote:
Andrey Semashev wrote:
Unfortunately, text_multifile_backend is supposed to open and close the file on every log record, as the file name is generated from the log record.
If you maintain a small LRU cache, will it help?
That's what I had in mind, but it will have its own problems.
One of the main use cases for text_multifile_backend is when you want to maintain a separate log for each of the user's business process entities (e.g. a session or a connection of some sort). There can be any number of such entities, and I can't come up with a reasonable limit for a cache in the sink backend. Naturally, the cache will be ineffective if the number of actively logging entities exceeds its capacity. At the same time, I don't want to keep any stale open files in the cache.
It seems I'll have to maintain some sort of timeout for closing files and cleaning up the cache. But I will only be able to perform the cleanup on log records, which means I'll still have stale open files in the cache if no logging is happening.
Keeping files open is not only a concern from a resource usage perspective. It may also affect the user's experience: on Windows, for example, the user won't be able to remove the open files or the containing directory. Deleting an open file on POSIX systems will succeed, but it also means some subsequent log records can be lost, as they will be written to the deleted file instead of the new one.
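[For illustration only, here is a rough sketch of the cache-with-timeout approach discussed above (hypothetical code, not the actual Boost.Log backend; the class and member names are made up): a map of open streams keyed by file name, where idle streams are closed only when the next record is written, which is exactly why files stay open if logging stops.]

    #include <chrono>
    #include <fstream>
    #include <map>
    #include <string>

    // Hypothetical per-file-name cache of open streams. Idle streams are
    // closed only when write() is called, so if no records arrive, open
    // files linger indefinitely -- the problem described above.
    class multifile_cache
    {
        struct entry
        {
            std::ofstream stream;
            std::chrono::steady_clock::time_point last_used;
        };

        std::map<std::string, entry> m_files;
        std::chrono::seconds m_timeout{30}; // arbitrary example timeout

    public:
        void write(std::string const& file_name, std::string const& record)
        {
            auto const now = std::chrono::steady_clock::now();

            // Cleanup happens only here, i.e. only when a log record arrives
            for (auto it = m_files.begin(); it != m_files.end();)
            {
                if (now - it->second.last_used > m_timeout)
                    it = m_files.erase(it); // destroying the entry closes the file
                else
                    ++it;
            }

            entry& e = m_files[file_name];
            if (!e.stream.is_open())
                e.stream.open(file_name, std::ios::app);
            e.stream << record << '\n';
            e.last_used = now;
        }
    };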
It's definitely not trivial. The right thing to do would probably be to implement a deferred close, where a file lingers for a second or so and is then closed by a background thread. If it's used again within that second, it doesn't need to be closed and reopened.
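[A minimal sketch of such a deferred close, under the same caveat that this is hypothetical illustration code and not an actual Boost.Log interface: a write keeps the stream open and refreshes its timestamp, while a background thread periodically closes streams that have been idle for roughly a second.]

    #include <chrono>
    #include <condition_variable>
    #include <fstream>
    #include <map>
    #include <mutex>
    #include <string>
    #include <thread>

    // Hypothetical deferred-close cache: files linger for ~1 second after
    // the last write and are then closed by a background thread.
    class deferred_close_cache
    {
    public:
        deferred_close_cache() : m_stop(false), m_closer(&deferred_close_cache::close_idle_files, this) {}

        ~deferred_close_cache()
        {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_stop = true;
            }
            m_cv.notify_one();
            m_closer.join();
        }

        void write(std::string const& file_name, std::string const& record)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            entry& e = m_files[file_name];
            if (!e.stream.is_open())
                e.stream.open(file_name, std::ios::app);
            e.stream << record << '\n';
            e.last_used = std::chrono::steady_clock::now(); // refresh the linger timer
        }

    private:
        void close_idle_files()
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            while (!m_stop)
            {
                // Wake up periodically and close files idle for more than a second
                m_cv.wait_for(lock, std::chrono::milliseconds(250));
                auto const now = std::chrono::steady_clock::now();
                for (auto it = m_files.begin(); it != m_files.end();)
                {
                    if (now - it->second.last_used > std::chrono::seconds(1))
                        it = m_files.erase(it); // destroying the entry closes the file
                    else
                        ++it;
                }
            }
        }

        struct entry
        {
            std::ofstream stream;
            std::chrono::steady_clock::time_point last_used;
        };

        std::map<std::string, entry> m_files;
        std::mutex m_mutex;
        std::condition_variable m_cv;
        bool m_stop;
        std::thread m_closer;
    };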