
Lock Pools and Resource Groups to the Rescue

April 11, 2013

Recently, I’ve been playing with lock pools and resource groups. The core idea is to minimize serialization costs in multithreaded code (avoiding mutex creation/destruction is a natural “bonus”/side effect, particularly for short-lived dynamic mutexes).

Using locks to protect shared resources is a typical programming practice (to avoid race conditions), and they should be held for the shortest possible time (as David Butenhof points out, if you hold a lock for too long, you’re actually preventing the concurrency you were trying to achieve when you started using threads).

Sometimes we “play it safe” and impose a coarse lock granularity on data structures/systems. Everybody knows the result: software performance is far from ideal, suffering under big/giant locks (global locks are bottlenecks regularly found when porting complex code to other architectures/hardware, and/or when developing tricky code).

Finer-grained locking can be thought of as a data structure “dispersion” exercise, especially when dealing with simple applications that employ common constructs (such as arrays and linked lists). Instead of using just one mutex to preserve the integrity of a shared resource, we change the resource’s inner structure to allow the use of multiple mutexes. With more than one lock providing serialization, contention is expected to drop (a good time for an old mantra: always measure first to assess whether the optimization is needed).

One trivial example might help with the concept. Picture a global linked list used for caching purposes. Concurrent access to this resource’s critical regions is naturally protected by a single lock, like this:

(Figure: lock pools 1, a single global linked list protected by one lock)
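For concreteness, here is a minimal sketch of that single-lock design, assuming a toy key/value cache built on a singly linked list (every name here is illustrative, not taken from any real codebase):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct entry {
    char         *key;
    void         *value;
    struct entry *next;
};

/* One global list, one global lock: every lookup/insert serializes here. */
static struct entry    *cache_head;
static pthread_mutex_t  cache_lock = PTHREAD_MUTEX_INITIALIZER;

void *cache_lookup(const char *key)
{
    void *value = NULL;

    pthread_mutex_lock(&cache_lock);
    for (struct entry *e = cache_head; e != NULL; e = e->next) {
        if (strcmp(e->key, key) == 0) {
            value = e->value;
            break;
        }
    }
    pthread_mutex_unlock(&cache_lock);
    return value;
}

void cache_insert(const char *key, void *value)
{
    struct entry *e = malloc(sizeof(*e));

    e->key   = strdup(key);
    e->value = value;

    pthread_mutex_lock(&cache_lock);
    e->next    = cache_head;
    cache_head = e;
    pthread_mutex_unlock(&cache_lock);
}
```

Every thread touching the cache, reader or writer, fights for the same `cache_lock`.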

Now, we slightly change the way we approach the caching, with a new design (I know… my drawing skills are precarious):

(Figure: lock pools 2, multiple global linked lists protected by a pool of locks)

In fact, we have just turned our single global linked list into multiple global linked lists (this is the aforementioned dispersion). Having created groups of shared resources, we can naturally use multiple locks (as a pool) to protect them.

A good hash function like Murmur (at the time of this writing, MurmurHash3 is the latest variant) may be used to build a scheme that maps each resource group to a lock in the pool. With any kind of fine-grained locking architecture, we have to be very careful to avoid deadlocks when using the protected resources.
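Here is a sketch of the dispersed design: the buckets below play the role of resource groups, and a smaller array of mutexes is the lock pool. The hash is a tiny FNV-1a stand-in for something like MurmurHash3, and all names (and the 64/16 sizing) are illustrative choices, not a recommendation:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define N_BUCKETS 64   /* resource groups, one linked list each      */
#define N_LOCKS   16   /* lock pool; several groups share each lock  */

struct entry {
    char         *key;
    void         *value;
    struct entry *next;
};

static struct entry    *buckets[N_BUCKETS];
static pthread_mutex_t  lock_pool[N_LOCKS];

/* Call once, before any other cache operation. */
void cache_init(void)
{
    for (int i = 0; i < N_LOCKS; i++)
        pthread_mutex_init(&lock_pool[i], NULL);
}

/* Tiny FNV-1a, standing in for MurmurHash3 or any decent hash. */
static uint32_t hash_key(const char *key)
{
    uint32_t h = 2166136261u;

    while (*key) {
        h ^= (unsigned char)*key++;
        h *= 16777619u;
    }
    return h;
}

/* Each key maps to one bucket; each bucket maps to one lock in the pool. */
static pthread_mutex_t *lock_for(uint32_t bucket)
{
    return &lock_pool[bucket % N_LOCKS];
}

void *cache_lookup(const char *key)
{
    uint32_t         b     = hash_key(key) % N_BUCKETS;
    pthread_mutex_t *lock  = lock_for(b);
    void            *value = NULL;

    pthread_mutex_lock(lock);
    for (struct entry *e = buckets[b]; e != NULL; e = e->next) {
        if (strcmp(e->key, key) == 0) {
            value = e->value;
            break;
        }
    }
    pthread_mutex_unlock(lock);
    return value;
}

void cache_insert(const char *key, void *value)
{
    uint32_t         b    = hash_key(key) % N_BUCKETS;
    pthread_mutex_t *lock = lock_for(b);
    struct entry    *e    = malloc(sizeof(*e));

    e->key   = strdup(key);
    e->value = value;

    pthread_mutex_lock(lock);
    e->next    = buckets[b];
    buckets[b] = e;
    pthread_mutex_unlock(lock);
}
```

Two threads now contend only when their keys land on groups that share the same pool lock. The deadlock caution above applies the moment an operation needs to hold more than one of these locks at once: acquire them in a fixed order (e.g. by ascending pool index).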

One may argue that Software Transactional Memory (STM) is a better concurrency control mechanism, well suited as an easier and faster alternative to lock-based synchronization; unfortunately, this is not always the case (one must take legacy compatibility/support, performance on specific environments/runtimes/languages/libs, developer culture, etc. into account).

An interesting take on the lock pool subject is FreeBSD’s mtx_pool (a kernel development facility). Its implementation is a useful way to study a more generic approach to some of the ideas/concepts presented here.
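From memory, its usage looks roughly like the kernel-side sketch below: a structure’s address is hashed to pick one mutex from a shared pool, so short-lived objects never need a mutex of their own. Treat the names here as my recollection, not gospel, and check the mtx_pool(9) manual page for the actual interface:

```c
/* Kernel-side sketch (not userland-runnable); names as I recall them
 * from mtx_pool(9), so double-check against the manual page. */
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static void
touch_object(void *obj)
{
	/* The lock is chosen by hashing obj's address into the pool. */
	mtx_pool_lock(mtxpool_sleep, obj);
	/* ... short critical section over *obj ... */
	mtx_pool_unlock(mtxpool_sleep, obj);
}
```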