Nicholas Neumann wrote:
Thinking a little bit differently, the multifile backend could allow for a client-supplied callback, run after each record consume, which tells the backend whether to close the file or leave it open in an unbounded cache.
If you're going to use a client-supplied callback, the most straightforward one would just return a shared_ptr<FILE> (or the appropriate equivalent). Then both opening and closing are under the callback's control. E.g. the simple one would just do

    shared_ptr<FILE> callback( char const* fn )
    {
        if( FILE* f = std::fopen( fn, "rb" ) )
            return shared_ptr<FILE>( f, std::fclose );
        else
            return shared_ptr<FILE>();
    }

whereas a simple caching one would instead do

    // keyed by std::string so lookups compare file names, not pointer values
    std::map< std::string, shared_ptr<FILE> > s_cache;

    shared_ptr<FILE> callback( char const* fn )
    {
        if( s_cache.count( fn ) )
        {
            return s_cache[ fn ];
        }
        else if( FILE* f = std::fopen( fn, "rb" ) )
        {
            shared_ptr<FILE> sf( f, std::fclose );
            s_cache[ fn ] = sf;
            return sf;
        }
        else
        {
            return shared_ptr<FILE>();
        }
    }

(mutex locks protecting s_cache omitted for brevity), and a more sophisticated cache would have a flush strategy.
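For illustration, a rough sketch of what such a flush strategy might look like: a bounded cache with least-recently-used eviction. The s_max_open limit and the LRU list are made-up names for the example, not part of any existing backend API; I'm using std::shared_ptr here (boost::shared_ptr would work the same way), and locking and error handling are again omitted.

    #include <cstdio>
    #include <list>
    #include <map>
    #include <memory>
    #include <string>

    using std::shared_ptr;

    std::size_t const s_max_open = 16;                   // hypothetical limit on open files
    std::list< std::string > s_lru;                      // most recently used name at the front
    std::map< std::string, shared_ptr<FILE> > s_cache;   // file name -> open handle

    shared_ptr<FILE> callback( char const* fn )
    {
        std::string key( fn );

        std::map< std::string, shared_ptr<FILE> >::iterator it = s_cache.find( key );
        if( it != s_cache.end() )
        {
            // cache hit: mark the file as most recently used
            s_lru.remove( key );
            s_lru.push_front( key );
            return it->second;
        }

        FILE* f = std::fopen( fn, "rb" );
        if( !f ) return shared_ptr<FILE>();

        shared_ptr<FILE> sf( f, std::fclose );

        // evict the least recently used entry once the cache is full;
        // the FILE is actually closed only when the last shared_ptr to it is released
        if( s_cache.size() >= s_max_open )
        {
            s_cache.erase( s_lru.back() );
            s_lru.pop_back();
        }

        s_cache[ key ] = sf;
        s_lru.push_front( key );
        return sf;
    }

Because the handles are shared_ptrs, eviction never yanks a file out from under the backend: whoever still holds the pointer keeps a valid FILE until they drop it.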