The key idea of a std::atomic_thread_fence is to establish synchronisation and ordering constraints between threads without an atomic operation.
std::atomic_thread_fences are simply called fences or memory barriers. Those names immediately convey what a std::atomic_thread_fence is all about:
A std::atomic_thread_fence prevents specific operations from crossing a memory barrier.
But what does that mean? Which specific operations cannot cross a memory barrier, and what kinds of operations are they? From a bird's-eye perspective, we have two kinds of operations: read and write, or load and store. So the expression if (resultRead) return result is a load followed by a store operation.
There are four different ways to combine load and store operations:
- LoadLoad: A load followed by a load.
- LoadStore: A load followed by a store.
- StoreLoad: A store followed by a load.
- StoreStore: A store followed by a store.
Of course, there are more complex operations consisting of a load and a store part (count++), but these operations do not contradict my general classification.
But what about memory barriers? If you place a memory barrier between two operations such as LoadLoad, LoadStore, StoreLoad, or StoreStore, you have the guarantee that the corresponding operations cannot be reordered. The risk of reordering is always present if non-atomics or atomics with relaxed semantics are used.
Typically, three kinds of memory barriers are used: the full fence, the acquire fence, and the release fence. Just as a reminder: acquire is a load operation, release is a store operation. So what happens if I place one of the three memory barriers between the four combinations of load and store operations?
- Full fence: A full fence std::atomic_thread_fence(std::memory_order_seq_cst) between two arbitrary operations prevents the reordering of these operations. That guarantee does not hold for StoreLoad operations, however; they can be reordered.
- Acquire fence: An acquire fence std::atomic_thread_fence(std::memory_order_acquire) prevents a read operation before the fence from being reordered with a read or write operation after the fence.
- Release fence: A release fence std::atomic_thread_fence(std::memory_order_release) prevents a read or write operation before the fence from being reordered with a write operation after the fence.
I admit that I invested a lot of energy getting the definitions of the acquire and release fence, and their consequences for lock-free programming, right. In particular, the subtle differences from the acquire-release semantics of atomic operations are not so easy to grasp. But before I come to that point, I will illustrate the definitions with graphics.
Memory barriers illustrated
Which kinds of operations can cross a memory barrier? Have a look at the three following graphics. If an arrow is crossed with a red bar, the fence prevents that kind of operation.
std::atomic_thread_fence requires a memory order as its argument; there is no default argument. So for a full fence you write the sequential consistency explicitly: std::atomic_thread_fence(std::memory_order_seq_cst). If sequential consistency is used for a full fence, the std::atomic_thread_fence additionally follows a global order.
But I can depict the three memory barriers even more concisely.
Memory barriers at a glance
That was the theory. The practice will follow in the next post. There, in a first step, I compare an acquire fence with an acquire operation and a release fence with a release operation. In a second step, I port a producer-consumer scenario from acquire-release operations to fences.
Go to Leanpub/cpplibrary "What every professional C++ programmer should know about the C++ standard library". Get your e-book. Support my blog.