On 5 déc, 06:45, grbgooglefan <ganeshbo...@xxxxxxxxx> wrote:
> My application uses caching heavily to store data from
> databases as well as runtime order information.
> All these caches are built on STL hash maps, vectors, and maps,
> and the data is stored as structures.
> There are multiple threads accessing these caches simultaneously for
> reading the data as well as updating the data.
> Whenever any thread accesses a cache, it locks that cache, finds
> the required element, performs the required actions, and unlocks the cache.
> I am finding that this causes the other threads to wait for a long
> time, because the locking is at the level of the whole cache.
> I would like to make this locking finer-grained, so that only the
> single structure being updated is locked.
> This is much like row-level locking in databases.
> I am using the hash map, etc. from the default STL library on
> Linux (libstdc++), as an in-memory cache for a server.
> Most of these caches use a string as the key and a structure as
> the data. The flow of operations is roughly as follows:
> Thread 1 gets data from the database using a select query, populates
> the structures with that data, and pushes them into the cache.
> Thread 2 then picks up a structure and uses it to fetch other live,
> real-time data from another service. Once that real-time data is
> available, Thread 2 keeps updating it in the structures in the cache.
> Thread 3 reads the data from this cache and uses it to process
> requests from the client application.
> So all types of data-structure operations - insertions, erasures,
> updates, reads - are happening on this cache of structures.
> The cache holds more than 60,000 data elements (structures), and
> searching for each required structure takes a long time. So while one
> thread is looking for the structure it needs, the others have to wait
> and cannot do any productive work, because they also need data from
> the same cache (even if for a different type of action).
> How do we achieve this level of granular locking?
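One common way to get the per-structure locking asked about above is to give each cached entry its own mutex, and hold the map-wide lock only long enough to find (or insert) the entry. The sketch below is a minimal illustration using C++11 `std::unordered_map`, `std::mutex`, and `std::shared_ptr`; the same pattern works with the poster's hash map and `pthread_mutex_t`. The `Entry`/`Cache` names and the `std::string data` payload are placeholders for the real structures, not anything from the original post.

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Each cached structure carries its own mutex, so an update locks only
// that one entry, not the whole cache (analogous to row-level locking).
struct Entry {
    std::mutex m;       // protects `data` for this entry only
    std::string data;   // placeholder for the real structure's fields
};

class Cache {
    std::mutex map_m;   // protects the map itself (insert/erase/lookup)
    std::unordered_map<std::string, std::shared_ptr<Entry>> map;
public:
    std::shared_ptr<Entry> get_or_insert(const std::string& key) {
        std::lock_guard<std::mutex> lk(map_m);  // held only for the lookup
        auto& e = map[key];
        if (!e) e = std::make_shared<Entry>();
        return e;  // the shared_ptr keeps the entry alive after we unlock
    }
    void update(const std::string& key, const std::string& value) {
        auto e = get_or_insert(key);
        std::lock_guard<std::mutex> lk(e->m);   // only this entry is locked
        e->data = value;
    }
    std::string read(const std::string& key) {
        auto e = get_or_insert(key);
        std::lock_guard<std::mutex> lk(e->m);
        return e->data;
    }
};
```

The `std::shared_ptr` is what makes this safe: a thread can drop the map lock and still work on its entry, even if another thread erases that key from the map in the meantime.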
Why don't you use a RW locking policy (many simultaneous readers, one
writer)? BTW, I don't find 60,000 data elements all that large, so
maybe the bottleneck is somewhere else (the DB query?).