On 12 Sep 2016 at 7:51, eg wrote:
... Windows may take until the next reboot to delete a file, and until then it cannot be opened by anybody.
This is news to me. Do you have any links to documentation on this?
Short answer: as the Win32 docs for DeleteFile() say: "The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed [in the system]. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED." https://msdn.microsoft.com/en-us/library/windows/desktop/aa363915%28v=vs.85%29.aspx [square bracketed text added by me]

Longer answer: the Windows NT kernel was originally designed as the next edition of the VAX VMS kernel, and many considered its design choices superior, if more conservative, to those of the Unixes of the day which ended up becoming POSIX. Many of the NT kernel APIs are very similar to those of VMS as a result. One interesting feature of the VMS filesystem was that when you deleted a file, the system went off and securely scrubbed the contents before performing the deletion, a process which could take considerable time. NTFS and Windows NT inherited that behaviour, and it was furthermore considered valuable for secure-by-design code to lock out use of a file being deleted, because that makes inode-swapping tricks and other such security holes found on POSIX systems impossible on VMS and NT systems. NT does let you explicitly opt into POSIX-like semantics by renaming a file to a unique name before deleting it, as AFIO does, but its default semantics are more secure than the POSIX default behaviour.

Ultimately of course the ship has sailed, and POSIX is now the standard. NT reflects a continuing objection to many design failures in POSIX, especially around the filesystem, where POSIX made many deeply flawed design decisions.

As a result of the above behaviours, unfortunately the lion's share of code out there written for Windows which deals with the filesystem is simply wrong. It just happens to work most of the time, and its authors either don't know or don't care that it is racy and will cause misoperation for some users.
A lot of big famous open source projects do indeed refuse to fix incorrect code after a bug report because they just don't believe it's a problem, mainly because they don't fully understand what files that cannot be unlinked mean for correct filesystem code design. A group of dedicated AFIO repo followers have tried logging bugs about these bad design patterns with various projects, and it has been quite amazing how little people care that their code will always fail under the right conditions.

In the end, programmers have always written code as if the file system were unchanging underneath them, leading to a large number of race bugs and security holes caused by unconsidered concurrent changes made by third parties. AFIO is intended to help programmers avoid these sorts of issues more easily than is possible with just the Boost and standard C++ library facilities, with which it is currently quite hard to write completely bug-free filesystem code without resorting to proprietary APIs.

Those wanting lots more detail may find my conference presentations worth watching:

20150924 CppCon Racing the File System Workshop https://www.youtube.com/watch?v=uhRWMGBjlO8
20160421 ACCU Distributed Mutual Exclusion using Proposed Boost.AFIO https://www.youtube.com/watch?v=elegewDwm64

The third in the series is coming next week at CppCon.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/