If you’ve done any amount of threaded programming, you’ve probably run across code that looked like:
// Only accessed with the mutex held.
uint32_t mFlags;
bool mConnected;
nsTArray<int32_t> mData;
// Only called with the mutex held.
void DoSomething();
Perhaps you’ve even gotten to debug code which inadvertently violated the locking requirements of the members.
Several months ago, I reviewed a patch by David Keeler that addressed the second half of the above example. Methods that had locking requirements looked like:
void DoSomething(MutexAutoLock& aProofOfLock);
which ensures (at compile time!) that you can’t call the function without locking the mutex first. I thought this was a nice technique, said as much in my review, and have been looking for places to apply it ever since.
The explicitness and the requirement to constantly pass MutexAutoLock& variables around is a feature, not a bug. Doing so encourages you to limit the amount of code that needs to be executed with locks held, keeping the concurrent parts of the code and the synchronized parts clearly delimited. In this respect, the technique is similar to the IO monad in Haskell. I don’t know whether the extra verbosity would make the code more difficult to read, especially if the techniques for guarding members suggested below were applied as well.
This coding style also came in handy a week or so ago while investigating overly high CPU usage when playing YouTube videos. Our event queue for nsIRunnables did its own locking internally and exposed its internal reentrant monitor “for power users”. This led to code like:
{
  ReentrantMonitorAutoEnter mon(mEvents.GetReentrantMonitor());
  ...
  mEvents.PutEvent(event);
}
where the PutEvent call would do an extra round of locking (“just to be sure”), which was wasted work. Having data structures like this do their own locking internally typically isn’t a great idea, so part of the work in the bug above was to separate the locking requirements of the queue from who actually needs to do the locking. Or, in other words, we can have the class that owns the event queue do the locking, and have the event queue’s methods enforce that locking programmatically:
{
  MonitorAutoLock mon(mMonitor);
  ...
  mEvents.PutEvent(event, mon);
}
Now there’s no wasted work in PutEvent, because it already knows the appropriate locking has been done. The ability to use non-reentrant monitors, which are more efficient, was a nice bonus resulting from the separation of concerns here.
This technique can also help solve the first half of the problem we presented at the beginning of this post: ensuring members are only accessed with locks held.
template<typename T>
class Guarded;

template<>
class Guarded<uint32_t>
{
public:
  Guarded() : mValue(0) {}

  uint32_t Value(MutexAutoLock& aProofOfLock)
  {
    return mValue;
  }

  void Assign(MutexAutoLock& aProofOfLock, uint32_t aNewValue)
  {
    mValue = aNewValue;
  }

  // Since accesses should only be done under the lock, and copying
  // and moving would therefore require locks, we require the user
  // to ensure those constraints are met with explicit calls to the
  // above rather than the compiler sneaking unlocked accesses in.
  Guarded(const Guarded&) = delete;

  ...

private:
  uint32_t mValue;
};
The above class isn’t production-quality code; it’s intended to sketch out how explicit locking requirements might work. A more robust version might require a Mutex& reference to be passed to the constructor, and have member functions assert that the MutexAutoLock& parameters actually lock the specified mutex. Specializing for each of the integer types would also get tiresome, so we’d need to do a better job there. Handling types with methods could be done with something like the following, I think:
template<typename T>
class GuardedAggregate
{
public:
  GuardedAggregate() : mValue() {}

  // The core idea here is that the user would write:
  //
  //   GuardedAggregate<nsTArray<int32_t>> mArray;
  //
  // and then accesses would be done via:
  //
  //   mArray.Value(lock).Method(...);
  //
  // This means that we don't have to proxy every single one of
  // the aggregate's methods, but the locking requirements are
  // still explicit.
  class Proxy
  {
  public:
    Proxy(MutexAutoLock& aProofOfLock, T* aValue) : mValue(aValue)
    {}

    T* operator->()
    {
      return mValue;
    }

  private:
    T* mValue;
  };

  Proxy Value(MutexAutoLock& aProofOfLock)
  {
    return Proxy(aProofOfLock, &mValue);
  }

  ...

private:
  T mValue;
};
This can also be thought of as a compiler-independent, but less flexible, version of clang’s Thread Safety Analysis. Folks have been asking about bringing the annotations that analysis requires into Gecko; I wonder if it might work just as well to apply this technique more liberally throughout the codebase.