On 6/18/22 15:44, Phil Endecott via Boost wrote:
Andrey Semashev wrote:
On 6/17/22 21:23, Phil Endecott via Boost wrote:
Dear Experts,
I'd like to be able to write something like this:

boost::upgrade_mutex m;

void f() {
    boost::upgrade_lock l(m);
    // read shared state
    g();
}

void g() {
    // precondition: upgrade lock held.
    boost::upgrade_to_unique_lock l(m);
    // write shared state
}
But that's not how it works. The upgrade_to_unique_lock takes the upgrade_lock as its ctor parameter, not the mutex.
Is there a good reason for that? I think all that upgrade_to_unique_lock needs to do is call m.unlock_upgrade_and_lock() in its ctor and m.unlock_and_lock_upgrade() in its dtor, i.e. it doesn't need to access any state in the upgrade_lock.
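Roughly this kind of guard (a sketch only; the class name is made up, and it relies on boost::upgrade_mutex's unlock_upgrade_and_lock() / unlock_and_lock_upgrade() members, trusting that the caller really does hold upgrade ownership):

#include <boost/thread/shared_mutex.hpp>

// Hypothetical guard taking the mutex directly; it trusts the caller to
// hold upgrade ownership rather than checking it.
class upgrade_to_unique_guard {
    boost::upgrade_mutex& m_;
public:
    explicit upgrade_to_unique_guard(boost::upgrade_mutex& m) : m_(m) {
        m_.unlock_upgrade_and_lock();     // upgrade -> exclusive
    }
    ~upgrade_to_unique_guard() {
        m_.unlock_and_lock_upgrade();     // exclusive -> back to upgrade
    }
    upgrade_to_unique_guard(const upgrade_to_unique_guard&) = delete;
    upgrade_to_unique_guard& operator=(const upgrade_to_unique_guard&) = delete;
};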
Am I missing something?
It is needed to enforce the contract: in order to upgrade to unique lock you must first obtain the upgradeable ownership[1] via upgrade_lock.
Sure. But why does it feel the need to enforce this particular contract, rather than trusting that the caller has met the precondition?
A good API does not allow for incorrect use. If you want unprotected low-level access, you can always call the mutex methods directly, but then all bets are off.
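For instance, the low-level path might look like this (just a sketch using boost::upgrade_mutex's lock_upgrade() / unlock_upgrade_and_lock() / unlock() members; nothing here checks that the calls are made in the right order):

boost::upgrade_mutex m;

void g_low_level() {
    m.lock_upgrade();                 // acquire upgrade ownership
    // ... read shared state ...
    m.unlock_upgrade_and_lock();      // atomically upgrade to exclusive
    // ... write shared state ...
    m.unlock();                       // release exclusive ownership
}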
In any case, it seems that in practice it can check that the mutex is upgrade-locked with an assert, without needing the upgrade_lock to be passed:
boost::upgrade_mutex m;
// no m.lock_upgrade() here
m.unlock_upgrade_and_lock();   // assertion failure
In general, you cannot efficiently implement such an assert, unless the mutex exclusively locks or busy loops internally. That is, you must perform the assert and upgrade the ownership atomically. If the mutex does not lock internally, it will not be able to assert until after the atomic RMW is done, which might have already damaged the mutex state. In any case, preventing incorrect code from being written is always better than a runtime check.
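To illustrate (a deliberately simplified sketch, not Boost's actual mutex: shared owners and blocking are omitted, and the transition is assumed to be a single fetch-style RMW rather than a CAS retry loop, which would amount to the internal busy loop mentioned above):

#include <atomic>
#include <cassert>

class toy_upgrade_mutex {
    static constexpr unsigned upgrade_bit   = 1u;
    static constexpr unsigned exclusive_bit = 2u;
    std::atomic<unsigned> state_{0};

public:
    void unlock_upgrade_and_lock() {
        // The upgrade bit can only be inspected via the value returned by
        // the RMW itself, so the assert necessarily runs after the state
        // has already been modified; if the precondition was violated, the
        // state word is already damaged at that point.
        unsigned old = state_.fetch_xor(upgrade_bit | exclusive_bit,
                                        std::memory_order_acq_rel);
        assert((old & upgrade_bit) && "upgrade ownership was not held");
    }
};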
My concern with this API design is that it changes where I have to declare the locks. Currently I always declare mutexes near the "wider scope" data that they protect, and (strict) scoped locks in the "narrow scope" where the access occurs:
// global scope:
data_t data;
mutex data_mut;

void f() {
    lock l(data_mut);
    access(data);
}
With the upgrade_to_unique_lock design it's more complicated. I think I need to put a long-lived but deferred upgrade_lock in the same scope as the mutex, and then lock-the-lock:
// global scope:
data_t data;
upgrade_mutex data_mut;
upgrade_lock data_upgrade_lock(data_mut, defer_lock);

void read_func() {
    some_lock_type l(data_upgrade_lock);   // locking the lock, not the mutex!?
    read(data);
    if (foo) write_func();
}

void write_func() {
    upgrade_to_unique_lock l(data_upgrade_lock);
    mutate(data);
}
Now that's not too much extra code to write, but note that it does not guarantee that I hold the upgrade_lock when I try to upgrade - I can call write_func() without having locked the (non-strict) upgrade_lock. So it does not help with enforcing the precondition (at compile time, anyway).
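For example, nothing stops this (continuing the sketch above; what upgrade_to_unique_lock does in this case is up to the library, but nothing at compile time rejects it):

int main() {
    // data_upgrade_lock is still in its deferred (unlocked) state here,
    // yet this call compiles fine and can only fail, if at all, at runtime.
    write_func();
}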
upgrade_lock is intended to be created when you need to lock the mutex, just as with any other lock type. You can create it as a member, but that's not what I'd call a good design, as it doesn't express the requirement that it is locked when upgrade_to_unique_lock is constructed. I would pass a reference to the upgrade_lock as an argument to every function that expects it to be locked and wants to upgrade it:

void read_func() {
    upgrade_lock l(data_mut);
    read(data);
    if (foo) write_func(l);
}

void write_func(upgrade_lock& l) {
    upgrade_to_unique_lock ll(l);
    mutate(data);
}

This API is self-documenting: you won't be calling write_func without having an upgrade_lock.