On Tue, Sep 1, 2015 at 7:36 PM, Niall Douglas wrote:
This is a complex and lengthy answer if answered in full, and is a large part of the AFIO value add. The housekeeping done during handle open is used throughout AFIO to skip work in later operations under the assumption that handle opens are not common.
Limiting the discussion to just the problem of race free filesystem operations on POSIX, imagine the problem of opening the sibling of a file in a directory whose path is constantly changing. POSIX does not provide an API for opening the parent directory of an open file descriptor. To work around this, you must first get the canonical current path of the open file descriptor, strip off the leafname, open the resulting directory, and then use that as a base directory for opening a file with the same leafname as your original file. If that open fails, or the file opened has a different inode, you loop all of that again. Once you have the correct parent directory, you can open the sibling. This is an example of where caching the stat_t of a handle during open saves syscalls and branches in more performance-important APIs later on.
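The loop described above can be sketched roughly as follows. This is Linux-specific (it derives the canonical path via /proc), and the function name and retry limit are illustrative, not AFIO's actual implementation:

```cpp
// Hedged sketch: re-derive the parent directory of an already-open file
// descriptor, looping until the directory we opened still contains the
// same inode. POSIX has no "open parent of fd" call, so we go via
// /proc/self/fd (Linux-only).
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <climits>
#include <cstdio>
#include <string>

// Returns an O_DIRECTORY fd for the parent of `fd`, or -1 on failure.
// On success, `leaf` receives the file's leafname in that directory.
int open_parent_of(int fd, std::string &leaf) {
  struct stat want{};
  if (fstat(fd, &want) == -1) return -1;
  for (int attempt = 0; attempt < 10; ++attempt) {
    // 1. Get the canonical current path of the open descriptor.
    char linkpath[64], buf[PATH_MAX + 1];
    snprintf(linkpath, sizeof linkpath, "/proc/self/fd/%d", fd);
    ssize_t n = readlink(linkpath, buf, PATH_MAX);
    if (n <= 0) return -1;
    buf[n] = '\0';
    std::string path(buf);
    auto slash = path.rfind('/');
    if (slash == std::string::npos) return -1;
    // 2. Strip the leafname and open the resulting directory.
    std::string dir = slash ? path.substr(0, slash) : std::string("/");
    leaf = path.substr(slash + 1);
    int dirfd = open(dir.c_str(), O_RDONLY | O_DIRECTORY);
    if (dirfd == -1) continue;  // directory may have been renamed; retry
    // 3. Verify the leafname in that directory is still our inode.
    struct stat got{};
    if (fstatat(dirfd, leaf.c_str(), &got, 0) == 0 &&
        got.st_dev == want.st_dev && got.st_ino == want.st_ino)
      return dirfd;  // this dirfd really is our current parent
    close(dirfd);    // the path changed under us; loop and retry
  }
  return -1;
}
```

With the verified dirfd in hand, an openat() of another leafname against it opens the sibling without ever having trusted a full path.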
The normal way to do this with POSIX *at APIs would be to just open a handle to the directory in the first place. I suppose the purpose of this more complex approach is to avoid having to keep an extra file descriptor to the directory open, or to allow the user to open a sibling file from an arbitrary AFIO file handle without preparing in advance (as I suppose would be required by your shadow file-based locking approach). It does seem like rather specific and not necessarily all that common functionality to require users to pay for by default.
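For contrast, the conventional *at approach mentioned above might look like this (the path and filename are made up for the example):

```cpp
// Sketch of the straightforward POSIX *at pattern: keep a descriptor to
// the directory open from the start, then resolve names relative to it.
#include <fcntl.h>
#include <unistd.h>

int open_sibling_via_dirfd() {
  // Hold the parent directory open up front...
  int dirfd = open("/tmp", O_RDONLY | O_DIRECTORY);
  if (dirfd == -1) return -1;
  // ...so later opens are relative to the directory object itself,
  // immune to any renaming of the directory's path in the meantime.
  int fd = openat(dirfd, "afio_sibling_demo.txt", O_CREAT | O_RDWR, 0600);
  close(dirfd);
  return fd;  // caller closes; -1 on failure
}
```

The cost is exactly the trade-off discussed here: an extra file descriptor held open per directory you may want to resolve against.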
A similar problem exists for race free file deletions and a long list of other scenarios. The cause is a number of defects in the POSIX race free APIs, of which the Austin Working Group are aware. Windows doesn't have problems with race free siblings and deletions thanks to a much better thought-through race free API, but it does have other problems with deletions not being actual deletions, and different workarounds are needed there.
Personally, I would prefer an API that lets me pay only for what I need. You could expose the low-level platform-specific behavior but also provide higher-level operations that have added complexity to avoid races or emulate behavior not provided natively.
You can disable the race free semantics, which are slow, using afio::file_flags::no_race_protection. However, AFIO fundamentally assumes that any path consuming function is slow and will not be frequently called by applications expecting high performance, so work is deliberately loaded into path consuming functions to keep it away from all other functions.
If you need to do a lot of file opens and closes and don't care about races on the file system, you are better off not using AFIO. STL iostreams is perfectly good for this, as the only OSs which allow async file opens and closes are QNX and the Hurd.
There is certainly a large gap between what is possible with STL iostreams or Boost filesystem and what is possible using platform-specific APIs for file system manipulations. While it is obviously your choice what scope to pick, I think it may be possible for it to be usable for everything you intend to do with it, e.g. for writing a database backend, but also to be much more widely useful.
It is always easy to say "it should do everything optimally". If I had more resources I could do much better than I intend to do. As it stands, without sponsorship you get just 350-400 hours per year; that's just two months full-time equivalent. It's very limiting. You have to rationalise.
I certainly understand. I'm just trying to convey what I'd like to see in a C++ filesystem API.
Stability with respect to what? Do you expect native handles to be somehow different on the same platform?
AFIO's design allows many backends e.g. ZIP archives, HTTP etc. So yes, native handles can vary on the same platform and I needed to choose one for the ABI.
Perhaps you could provide a less generic API for the native platform filesystem, and then, once you also have support for ZIP archives etc., create a more generic interface on top. While archive files and network protocols like WebDAV and FTP can certainly be presented as file systems, particularly for read-only access, and indeed it is often possible to access them as file systems through the native platform APIs, they tend to differ sufficiently from native file systems that a different API may be more suitable.
I think this is another storm in a teacup. Dropping generic filesystem backends just because native_handle() doesn't return an int? Seems way overkill.
Generic filesystem backends could let you do loopback mounts and a long list of value add scenarios. I would consider them a vital design point.
I suspect that there may be a relatively easy solution to the particular problem of type safety for native_handle(), e.g. by exposing a platform-specific handle type, but also having some type erasure.

I agree that having a generic interface to filesystem-like things is potentially useful. However, I think this is quite a tricky thing, and it is difficult to define such an interface without looking at the capabilities of all of the backends you'd like to support and what functionality different applications would like to use. For instance, some backends might only support reading/writing entire files at once. Some might not allow random access. Some might only allow appending new files. Symlinks, hardlinks, file permissions, modes, flags and other metadata may not be supported or may operate completely differently. (Even if e.g. POSIX file owner/group and mode are supported, the semantics will in general be completely different than for a local filesystem.) Using Boost.Filesystem path doesn't really make sense for anything other than the native OS filesystem APIs.

Indeed there are numerous existing libraries/systems that try to do this, and even ways to expose arbitrary things as filesystems to the operating system, e.g. FUSE on Linux, so that individual programs don't need to concern themselves with it. However, there tend to be drawbacks to trying to make these pseudo-filesystems appear as regular filesystems.
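To make the capability-probing idea concrete, here is a minimal hypothetical sketch (all names are invented for illustration, not AFIO's actual design): a backend interface that advertises what it supports, plus a trivial in-memory backend implementing the lowest common denominator:

```cpp
// Hypothetical type-erased filesystem backend with capability flags, so
// callers can check what a backend supports before relying on it.
#include <cstdint>
#include <map>
#include <string>
#include <vector>

enum caps : uint32_t {
  cap_random_access = 1 << 0,  // pread/pwrite at arbitrary offsets
  cap_write         = 1 << 1,  // any writing at all
  cap_append_only   = 1 << 2,  // new files may only be appended
  cap_symlinks      = 1 << 3,  // symlink metadata is meaningful
};

struct backend {
  virtual ~backend() = default;
  virtual uint32_t capabilities() const = 0;
  // Whole-file read: the one operation every backend can offer.
  virtual std::vector<char> read_all(const std::string &path) = 0;
};

// Toy backend keeping "files" in a map, purely for illustration.
struct mem_backend : backend {
  std::map<std::string, std::vector<char>> files;
  uint32_t capabilities() const override {
    return cap_random_access | cap_write;
  }
  std::vector<char> read_all(const std::string &path) override {
    auto it = files.find(path);
    return it == files.end() ? std::vector<char>{} : it->second;
  }
};
```

A ZIP or FTP backend would advertise a different (likely smaller) capability set, which is precisely where the design difficulty lies: each operation a caller wants must either be in the common denominator or be guarded by a capability check.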
Also correct. Delayed allocation means the file system may only try to actually allocate storage on a first write into that extent, and that might fail. This would then blow up with a fatal app exit thanks to AFIO's current implementation. As I said, I am very aware of this, I just needed lightweight futures done before I could start the ASIO reactor replacement.
It isn't clear to me why it would be particularly hard to expose this as a regular error rather than a fatal one, but it sounds like you are planning to fix it anyway.