On 25 Aug 2015 at 8:22, Thomas Heller wrote:
As far as I can see, there is not a single function documented that
returns anything other than some equivalent to "shared_future".
The synchronous single shot functions return an afio::handle_ptr or
some type appropriate for that function (e.g. enumerate() returns a
vector of directory_entry).
The asynchronous single shot functions return an afio::future<T>
where T is usually void. afio::future<T>.get_handle() returns an
afio::handle_ptr, while afio::future<T>.get() returns whatever T is
(which might be void).
The batch functions return vectors of afio::future<T>.
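To make those three shapes concrete, here is a caller-side sketch
(the return types are as described above; afio::file as the
synchronous counterpart to async_file is my assumption, so treat
this as an illustration rather than the documented signatures):

  // Synchronous single shot: the value comes back directly.
  afio::handle_ptr h = afio::file("niall.txt");

  // Asynchronous single shot: an afio::future<T> carrying both the
  // handle and the T (here T is void).
  afio::future<> oh = afio::async_file("niall.txt");
  afio::handle_ptr h2 = oh.get_handle();  // blocks until ready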
Ahh, I missed the enumerate (btw, the sync enumerate example only
shows the usage of the async version...) and stat functions (and
maybe some others). The question remains:
Why do they also need to return a handle?
[snip]
Why does it need to be shared_ptr<handle> in the first place? Why not
just handle? A shared_ptr always implies shared ownership; when, for
example, a boost::afio::file function returns, with whom is the
ownership shared?
[snip]
It is misdesigned in the way that it implies shared ownership from
the start; that is something that should be left to the user.
[snip]
That's actually a very strange statement. shared_ptr implies shared
ownership, i.e. some shared state. Sharing state requires
synchronization to protect it from races. And yes, if you modify this
shared state from two threads without synchronization, you have
accidentally introduced a race.
[snip]
That doesn't make a lot of sense... Of course, if your state is not
shared, you don't introduce races, but if you don't have shared
state, why use shared_ptr?
I think, Thomas, you have missed something very fundamental about the
AFIO design. I hope you don't mind me not replying to the rest of it
(which was useful); I'll just dig into the above, as others may be
where you are.
1. An afio::future<> is *always* a future handle to a file. You asked
why all functions always return a future handle to a file instead of
just the functions which open a file. I'm getting the feeling you
didn't understand that:
future<std::vector<directory_entry>> async_enumerate(future<> precondition)
... expands, quite literally as you'll see in afio.hpp, into:
return precondition.then(detail::async_enumerate());
... which is a Concurrency TS continuation. detail::async_enumerate
might then be effectively implemented as:
struct async_enumerate
{
  future<std::vector<directory_entry>> operator()(const future<> &precondition) const
  {
    return await do_async_directory_enumeration(precondition.get_handle());
  }
};
In other words, when the future precondition becomes ready, it will
execute the continuation, which calls the call operator on the
instance of detail::async_enumerate. That call operator will schedule
(via await) the directory enumeration of the file handle which has
just become ready in the future precondition.
This is why an afio::future<> is *always* a future to a handle to a
file. Does this make sense?
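For example, a caller might chain the enumeration onto a directory
open like so (a sketch; async_dir and the absence of extra parameters
are assumptions on my part):

  afio::future<> dirh = afio::async_dir("testdir");
  afio::future<std::vector<directory_entry>> items = async_enumerate(dirh);
  // items becomes ready only after both the open and the enumeration
  // have completed, in whatever thread runs the continuation.
  std::vector<directory_entry> entries = items.get();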
2. The reason why it's shared_ptr<handle> rather than handle * is
precisely the need to manage lifetime across an unknown set of
continuations. If the user writes:
future<> oh = async_file("niall.txt");
future<> reads[100];
for(size_t n = 0; n < 100; n++)
  reads[n] = async_read(oh, buffers[n], 10, n*4096);
So this schedules niall.txt to be opened, and then adds 100
continuations onto that file open, each of which schedules a read of
10 bytes at a 4Kb-multiple offset. All those 100 continuations occur
in *parallel*, and could complete in *any* order.
So how do you keep the handle to the niall.txt file around until the
last continuation using that file has completed, at some unknown
point in the future? The natural fit is shared_ptr; it's exactly what
it was designed for.
Does this make sense?
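The same lifetime pattern can be shown with nothing but the standard
library. In this standalone sketch, plain std::async stands in for
AFIO's continuation scheduling and the handle struct is a stand-in of
my own; each task copies the shared_ptr, so destruction of the handle
is deferred until the last task holding a copy has finished:

  #include <cstdio>
  #include <future>
  #include <memory>
  #include <vector>

  struct handle  // stand-in for a wrapped OS file handle
  {
    ~handle() { std::puts("handle closed"); }
  };

  int main()
  {
    std::vector<std::future<void>> reads;
    {
      auto h = std::make_shared<handle>();
      for (int n = 0; n < 100; n++)
        reads.push_back(std::async(std::launch::async, [h, n] {
          // ... read 10 bytes at offset n * 4096 via *h ...
        }));  // each task keeps its own copy of h
    }  // our reference to h is gone, yet the handle lives on
    for (auto &r : reads)
      r.get();
    // The handle is closed when the last copy of h disappears, not
    // when the local h went out of scope above.
  }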
3. You have stated that you dislike the idea of afio::future<T>
having shared future semantics for the future handle. But can you see
that it is not possible to schedule 100 parallel read continuations
onto a file open unless the handle has shared future semantics?
After all, if it had consuming (single) future semantics, you could
schedule exactly one read per file open, because that read would
consume the future. It would not be possible to schedule parallel
operations on any single preceding operation!
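The difference is easy to demonstrate with the standard types these
semantics are borrowed from (std::future versus std::shared_future
here; an analogy, not AFIO code):

  #include <future>
  #include <iostream>

  int main()
  {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    // Consuming semantics: only one consumer may take the result.
    // Fanning out to several consumers requires shared semantics:
    std::shared_future<int> sf = f.share();
    auto a = std::async(std::launch::async, [sf] { return sf.get() + 1; });
    auto b = std::async(std::launch::async, [sf] { return sf.get() + 2; });
    p.set_value(40);
    std::cout << a.get() << " " << b.get() << "\n";  // prints "41 42"
  }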
Perhaps I am not understanding your concerns correctly?
Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/