Latches And Barriers


Latches and barriers are simple thread synchronisation mechanisms that enable some threads to wait until a counter becomes zero. We will presumably get latches and barriers in C++20 in three variations: std::latch, std::barrier, and std::flex_barrier.

At first, there are two questions:

  1. What are the differences between these three mechanisms for synchronising threads? You can use a std::latch only once, but you can use a std::barrier and a std::flex_barrier more than once. Additionally, a std::flex_barrier enables you to execute a function when the counter becomes zero.
  2. What use cases do latches and barriers support that cannot be implemented in C++11 and C++14 with futures, threads, or condition variables in combination with locks? Latches and barriers provide no new use cases, but they are a lot easier to use. They are also more performant because they often use a lock-free mechanism internally. A hand-made C++11 counterpart of a latch is sketched right after this list.
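
To make the second point concrete, here is a minimal sketch of a latch-like helper built only from C++11 primitives. The names JobLatch, count_down, and wait are mine and only for illustration; they mimic the interface described below.

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// A hand-made, latch-like counter built from a mutex and a condition variable.
class JobLatch {
 public:
  explicit JobLatch(int count): count_(count) {}
  void count_down() {                                   // decrement the counter
    std::lock_guard<std::mutex> lock(mutex_);
    if (--count_ == 0) cond_.notify_all();              // wake all waiters at zero
  }
  void wait() {                                         // block until the counter is zero
    std::unique_lock<std::mutex> lock(mutex_);
    cond_.wait(lock, [this] { return count_ == 0; });
  }
 private:
  std::mutex mutex_;
  std::condition_variable cond_;
  int count_;
};

int main() {
  constexpr int numWorkers = 5;
  JobLatch done(numWorkers);
  std::vector<std::thread> workers;
  for (int i = 0; i < numWorkers; ++i) {
    workers.emplace_back([&done] { /* perform work */ done.count_down(); });
  }
  done.wait();                                          // block until all workers counted down
  for (auto& worker : workers) worker.join();
}

A std::latch collapses the mutex, the condition variable, and the counter into one type.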

Now, I will have a closer look at the three coordination mechanisms.

std::latch

std::latch is a counter that counts down. Its value is set in the constructor. A thread can decrement the counter and wait until it becomes zero by using the method latch.count_down_and_wait. In addition, the method latch.count_down only decreases the counter by 1 without waiting. std::latch further has the method latch.is_ready to test whether the counter is zero and the method latch.wait to wait until the counter becomes zero. There is no way to increment or reset the counter of a std::latch; hence, you cannot reuse it.
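
The following minimal sketch shows this interface in action. It assumes an implementation of the Concurrency TS (header <experimental/latch>); the names may still change for the final C++20 standard.

#include <experimental/latch>   // assumes a Concurrency TS implementation
#include <thread>
#include <vector>

int main() {
  constexpr int numThreads = 4;
  std::experimental::latch allDone(numThreads);      // counter starts at numThreads

  std::vector<std::thread> workers;
  for (int i = 0; i < numThreads; ++i) {
    workers.emplace_back([&allDone] {
      // perform work ...
      allDone.count_down_and_wait();                 // decrement and block until the counter is zero
      // all threads pass this point together
    });
  }
  for (auto& worker : workers) worker.join();
}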

For further details about std::latch, read the documentation on cppreference.com.

Here is a short code snippet from the proposal n4204.

 

 1 void DoWork(threadpool* pool) {
 2     latch completion_latch(NTASKS);
 3     for (int i = 0; i < NTASKS; ++i) {
 4       pool->add_task([&] {
 5         // perform work
 6         ...
 7         completion_latch.count_down();
 8       });
 9     }
10     // Block until work is done
11     completion_latch.wait();
12   }

 

I set the counter of the std::latch completion_latch in its constructor to NTASKS (line 2). The thread pool executes NTASKS tasks (lines 4 - 8). At the end of each task (line 7), the counter is decremented. Line 11 is the barrier for the thread running the function DoWork and, hence, for the small workflow. This thread has to wait until all tasks are done.

 

The proposal uses a vector<thread*> and pushes the dynamically allocated threads onto the vector: workers.push_back(new thread([&] {. That is a memory leak. Instead, you should put the threads into a std::unique_ptr or create them directly in a std::vector<std::thread>: workers.emplace_back([&]{. This observation holds for the examples for std::barrier and std::flex_barrier.
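
Here is a minimal sketch of the second fix; the name numThreads and the empty lambda body are only placeholders.

#include <thread>
#include <vector>

int main() {
  constexpr int numThreads = 4;

  std::vector<std::thread> workers;             // the vector owns the threads; no new or delete
  for (int i = 0; i < numThreads; ++i) {
    workers.emplace_back([] {                   // construct each std::thread in place
      // perform work
    });
  }
  for (auto& worker : workers) worker.join();   // all threads end before the vector is destroyed
}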

std::barrier

A std::barrier is quite similar to a std::latch. The subtle difference is that you can use a std::barrier more than once because the counter is reset to its previous value. Immediately after the counter becomes zero, the so-called completion phase starts. In the case of a std::barrier, this completion phase is empty. That changes with a std::flex_barrier. std::barrier has two interesting methods: barrier.arrive_and_wait and barrier.arrive_and_drop. While barrier.arrive_and_wait waits at the synchronisation point, barrier.arrive_and_drop removes the calling thread from the synchronisation mechanism.

Before I take a closer look at the std::flex_barrier and the completion phase, I will give a short example of the std::barrier.

 

 1 void DoWork() {
 2     Tasks& tasks;
 3     int n_threads;
 4     vector<thread*> workers;
 5 
 6     barrier task_barrier(n_threads);
 7 
 8     for (int i = 0; i < n_threads; ++i) {
 9       workers.push_back(new thread([&] {
10         bool active = true;
11         while(active) {
12           Task task = tasks.get();
13           // perform task
14           ...
15           task_barrier.arrive_and_wait();
16         }
17       }));
18     }
19     // Read each stage of the task until all stages are complete.
20     while (!finished()) {
21       GetNextStage(tasks);
22     }
23   }

 

The std::barrier task_barrier in line 6 is used to coordinate a number of threads that perform their tasks a few times. The number of threads is n_threads (line 3). Each thread gets its task in line 12 via tasks.get(), performs it, and waits, once it is done with its task (line 15), until all threads have done their tasks. After that, it takes a new task in line 12 as long as active is true in line 11.
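
The example does not show arrive_and_drop. Here is a minimal sketch of the idea; it assumes the Concurrency TS header <experimental/barrier>, and the roles of the threads are only for illustration. A thread that is done with all of its work drops out, so the remaining threads no longer wait for it in later iterations.

#include <experimental/barrier>   // assumes a Concurrency TS implementation
#include <thread>
#include <vector>

int main() {
  constexpr int numThreads = 3;
  std::experimental::barrier roundDone(numThreads);

  std::vector<std::thread> workers;
  for (int i = 0; i < numThreads; ++i) {
    workers.emplace_back([&roundDone, i] {
      // first round of work for all three threads
      if (i == 0) {
        roundDone.arrive_and_drop();   // thread 0 leaves; later rounds expect one arrival less
        return;
      }
      roundDone.arrive_and_wait();     // wait until all threads of this round have arrived
      // second round of work, done by the two remaining threads only
      roundDone.arrive_and_wait();
    });
  }
  for (auto& worker : workers) worker.join();
}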

std::flex_barrier

From my perspective, the names in the example for the std::flex_barrier are a little bit confusing. For example, the std::flex_barrier is called notifying_barrier in the proposal. Therefore, I used the name std::flex_barrier.

In contrast to the std::barrier, the std::flex_barrier has an additional constructor. This constructor can be parametrised with a callable unit that is invoked in the completion phase. The callable unit has to return a number. This number sets the value of the counter for the next iteration. A number of -1 means that the counter keeps the same value in the next iteration. Numbers smaller than -1 are not allowed.

What is happening in the completion phase?

  1. All threads are blocked.
  2. One thread is unblocked and executes the callable unit.
  3. When the completion phase is done, all threads are unblocked.

The code snippet shows the usage of a std::flex_barrier.

 

 1 void DoWork() {
 2     Tasks& tasks;
 3     int initial_threads;
 4     atomic<int> current_threads(initial_threads);
 5     vector<thread*> workers;
 6 
 7     // Create a flex_barrier, and set a lambda that will be
 8     // invoked every time the barrier counts down. If one or more
 9     // active threads have completed, reduce the number of threads.
10     std::function<int()> rf = [&] { return current_threads.load(); };
11     flex_barrier task_barrier(initial_threads, rf);
12 
13     for (int i = 0; i < initial_threads; ++i) {
14       workers.push_back(new thread([&] {
15         bool active = true;
16         while(active) {
17           Task task = tasks.get();
18           // perform task
19           ...
20           if (finished(task)) {
21             current_threads--;
22             active = false;
23           }
24           task_barrier.arrive_and_wait();
25         }
26       }));
27     }
28 
29     // Read each stage of the task until all stages are complete.
30     while (!finished()) {
31       GetNextStage(tasks);
32     }
33   }

 

The example follows a similar strategy to the example for std::barrier. The difference is that this time the counter of the std::flex_barrier is adjusted at run time. Therefore, the std::flex_barrier task_barrier in line 11 gets a lambda function. This lambda function captures the variable current_threads by reference. The variable is decremented in line 21, and active is set to false if the thread has finished its task. Hence, the counter is decreased in the completion phase.

A std::flex_barrier has one speciality compared to a std::barrier and a std::latch: it is the only one for which you can increase the counter.
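
Here is a minimal sketch of that speciality; it assumes the Concurrency TS names (header <experimental/barrier>, std::experimental::flex_barrier) and that the next iteration may be served by freshly created threads. The value returned by the completion function becomes the counter of the next iteration, so it can be larger than before.

#include <experimental/barrier>   // assumes a Concurrency TS implementation
#include <cstddef>
#include <thread>
#include <vector>

int main() {
  // The completion function returns the counter for the next iteration:
  // it grows here from 2 to 4.
  std::experimental::flex_barrier growingBarrier(2, [] { return std::ptrdiff_t(4); });

  std::vector<std::thread> workers;
  for (int i = 0; i < 2; ++i) {                    // first iteration: two threads
    workers.emplace_back([&growingBarrier] { growingBarrier.arrive_and_wait(); });
  }
  for (auto& worker : workers) worker.join();

  workers.clear();
  for (int i = 0; i < 4; ++i) {                    // second iteration: four threads
    workers.emplace_back([&growingBarrier] { growingBarrier.arrive_and_wait(); });
  }
  for (auto& worker : workers) worker.join();
}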

 

Read the details about std::latch, std::barrier, and std::flex_barrier at cppreference.com.

What's next?

Coroutines are generalised functions that can be suspended and resumed while keeping their state. They are often used to implement cooperative tasks in operating systems, event loops in event systems, infinite lists, or pipelines. You can read the details about coroutines in the next post.

 

 

 

 

 

 

 
