With atomic data types you can tailor your program to your needs and therefore optimize it. But now you enter the domain of the multithreading experts.
If you don't specify the memory model, sequential consistency is used. Sequential consistency guarantees two properties: each thread executes its instructions in source code order, and all threads follow a single global order of all operations.
std::cout << y.load() << " ";
std::cout << x.load() << std::endl;
This knowledge is sufficient to analyse the program. Because x and y are atomic, the program has no data race. So only one question remains: which values are possible for x and y? The question is easy to answer: because of sequential consistency, all threads have to follow a global order.
- x.store(2000); happens-before y.store(11);
- std::cout << y.load() << " "; happens-before std::cout << x.load() << std::endl;
Therefore, x.load() cannot be 0 if y.load() is 11, because x.store(2000) happens-before y.store(11).
All other values for x and y are possible. Here are three possible interleavings, producing the three different results for x and y:
- thread1 is completely executed before thread2.
- thread2 is completely executed before thread1.
- thread1 executes its first instruction x.store(2000) before thread2 is completely executed.
Here are all possible values for x and y.
So how does this look in CppMem?
atomic_int x= 0;
atomic_int y= 0;
First, a little bit of CppMem syntax. In lines 2 and 3, CppMem uses the typedef atomic_int for std::atomic<int>.
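The full CppMem listing is not shown here; a sketch of what it might look like follows. The `{{{ … ||| … }}}` notation is CppMem's syntax for running the two blocks as parallel threads.

```
int main() {
  atomic_int x = 0;
  atomic_int y = 0;
  {{{ { x.store(2000);
        y.store(11); }
  ||| { y.load();
        x.load(); }
  }}}
  return 0;
}
```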
If I execute the program, I'm overwhelmed by the sheer number of execution candidates.
There are 384 (1) possible execution candidates, only 6 of which are consistent. No candidate has a data race. How can that be?
But I'm only interested in the consistent executions. I use the interface (2) to analyse the six annotated graphs. The other 378 candidates are not consistent; that means, for example, that they do not respect the modification order. So I ignore them entirely.
We already know that all values for x and y are possible, except y = 11 together with x = 0. That's because of the default memory model.
Now the question is: which interleaving of the threads produces which values for x and y? I already introduced the symbols of the annotated graphs (CppMem - An overview), so I will concentrate my analysis on the results for x and y.
Execution for (y= 0, x= 0)
Executions for (y= 0, x= 2000)
Execution for (y= 11, x= 2000)
Do you have an idea why I used the red numbers in the graphs? I do, because I'm not done with my analysis.
If I look at the six different interleavings of the threads in the following graphic, one question arises: which sequence of instructions corresponds to which graph? Here is the solution: I have assigned to each sequence of instructions its corresponding graph.
Sequences of instructions
I start with the simpler cases:
- (1): It's quite simple to assign graph (1) to sequence (1). In sequence (1), x and y have the value 0, because y.load() and x.load() are executed before the operations x.store(2000) and y.store(11).
- (6): The argumentation for execution (6) is analogous. y has the value 11 and x the value 2000, because all load operations happen after all store operations.
- (2),(3),(4),(5): Now to the more interesting cases, in which y has the value 0 and x has the value 2000. The yellow arrows (sc) are the key to my reasoning, because they stand for the sequence of instructions. For example, let's look at execution (2).
- (2): The sequence of the yellow arrows (sc) in graph (2) is: Write x = 2000 => Read y = 0 => Write y = 11 => Read x = 2000. This sequence corresponds to the second interleaving of the threads (2).
In the next post I will break sequential consistency. So what will happen if I base my optimizations on the acquire-release semantics?
Go to Leanpub/cpplibrary "What every professional C++ programmer should know about the C++ standard library". Get your e-book. Support my blog.