One of the first things we learn while studying Operating Systems is that concurrent threads access the same memory regions by acquiring locks. Read requests acquire a "shared" lock, while write requests acquire an "exclusive" lock.
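As a quick illustration (my own sketch, not from the post), this is what shared vs. exclusive acquisition looks like with Go's `sync.RWMutex`: readers take the shared lock with `RLock`, the writer takes the exclusive lock with `Lock`.

```go
package main

import (
	"fmt"
	"sync"
)

// Account guards its balance with a reader-writer lock.
type Account struct {
	mu      sync.RWMutex
	balance int
}

// Read path: many goroutines may hold the shared (read) lock at once.
func (a *Account) Balance() int {
	a.mu.RLock()
	defer a.mu.RUnlock()
	return a.balance
}

// Write path: the exclusive lock blocks all readers and other writers.
func (a *Account) Deposit(amount int) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.balance += amount
}

func main() {
	acc := &Account{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			acc.Deposit(10)
			_ = acc.Balance()
		}()
	}
	wg.Wait()
	fmt.Println("final balance:", acc.balance)
}
```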

However, these methods are highly pessimistic, i.e. we are being immensely careful so that our transactions don't conflict. Lock management itself adds an overhead of 17%. If our workload is heavily read-oriented, locking is a futile exercise.

Here is an optimistic way to do things: run the transaction first, and then check whether there were any conflicts! Most transactions do not have conflicting read/write sets. It also turns out that validating transactions for conflicts can be done in parallel (and so can writing, for non-conflicting transactions). The idea, extremely simple yet brilliant, is used in almost all big systems in the industry today! Of course, it only gives a speedup when the time taken to verify conflicts is less than the cost of lock management plus deadlock resolution.
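To make that concrete, here is a minimal sketch of read/write-set validation in Go. The names (`Store`, `Txn`, `ErrConflict`) are my own, not from the paper, and for brevity validation happens serially inside a short critical section rather than in parallel: each transaction reads without holding any long-lived lock, buffers its writes, and at commit time checks that nothing it read has changed version. If validation fails, it aborts and the caller retries.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Store is a hypothetical in-memory key/value store with per-key versions.
type Store struct {
	mu      sync.Mutex
	values  map[string]int
	version map[string]uint64
}

func NewStore() *Store {
	return &Store{values: map[string]int{}, version: map[string]uint64{}}
}

// Txn records the versions it read and the writes it intends to make.
type Txn struct {
	store    *Store
	readVers map[string]uint64
	writes   map[string]int
}

func (s *Store) Begin() *Txn {
	return &Txn{store: s, readVers: map[string]uint64{}, writes: map[string]int{}}
}

// Read takes no long-lived lock; it just remembers the version it saw.
func (t *Txn) Read(key string) int {
	t.store.mu.Lock()
	defer t.store.mu.Unlock()
	t.readVers[key] = t.store.version[key]
	return t.store.values[key]
}

// Write buffers the new value locally; nothing touches the store yet.
func (t *Txn) Write(key string, val int) { t.writes[key] = val }

var ErrConflict = errors.New("validation failed: concurrent update detected")

// Commit = validate, then write. If any key we read has changed version,
// another transaction interfered, so we abort and let the caller retry.
func (t *Txn) Commit() error {
	t.store.mu.Lock()
	defer t.store.mu.Unlock()
	for key, ver := range t.readVers {
		if t.store.version[key] != ver {
			return ErrConflict
		}
	}
	for key, val := range t.writes {
		t.store.values[key] = val
		t.store.version[key]++
	}
	return nil
}

func main() {
	s := NewStore()
	txn := s.Begin()
	bal := txn.Read("balance")
	txn.Write("balance", bal+100)
	if err := txn.Commit(); err != nil {
		fmt.Println("retrying after:", err) // a real client would re-run the transaction
		return
	}
	fmt.Println("committed, balance =", s.values["balance"])
}
```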

[Original Paper from 1981](https://lnkd.in/fqW886B)