In addition to booleans, there are atomics for pointers, integrals, and user-defined types. The rules for user-defined types are special.

The atomic wrapper on a pointer T* std::atomic<T*> or on an integral type integ std::atomic<integ> enables the CAS (compare-and-swap) operations.


std::atomic<T*>

The atomic pointer std::atomic<T*> behaves like a plain pointer T*. So std::atomic<T*> supports pointer arithmetic and pre- and post-increment or pre- and post-decrement operations. Have a look at the short example.

int intArray[5];
std::atomic<int*> p(intArray);
p++;
assert(p.load() == &intArray[1]);
p += 1;
assert(p.load() == &intArray[2]);
--p;
assert(p.load() == &intArray[1]);

std::atomic<integral type>

In C++11, there are atomic counterparts to the known integral data types. A std::atomic<integral type> allows all that a std::atomic_flag or a std::atomic<bool> is capable of, and even more.

The composite assignment operators +=, -=, &=, |=, and ^= and their counterparts std::atomic<>::fetch_add(), std::atomic<>::fetch_sub(), std::atomic<>::fetch_and(), std::atomic<>::fetch_or(), and std::atomic<>::fetch_xor() are the most interesting ones. There is a small difference between these atomic read-modify-write operations: the composite assignment operators return the new value, while the fetch variations return the old value. There are no atomic multiplication, division, or shift operations. But that is not a big restriction, because these operations are seldom needed and can easily be implemented. How? Look at the example.

// fetch_mult.cpp

#include <atomic>
#include <iostream>

template <typename T>
T fetch_mult(std::atomic<T>& shared, T mult){
  T oldValue = shared.load();
  while (!shared.compare_exchange_strong(oldValue, oldValue * mult));
  return oldValue;
}

int main(){
  std::atomic<int> myInt{5};
  std::cout << myInt << std::endl;
  fetch_mult(myInt, 5);
  std::cout << myInt << std::endl;
}


I should mention one point. The multiplication in compare_exchange_strong will only happen if the relation oldValue == shared holds. So to be sure that the multiplication always takes place, I put it in a while loop. The output of the program is not so thrilling.

The program displays the value of myInt before and after the call: 5 and 25.

The implementation of the function template fetch_mult is generic, too generic: you can use it with an arbitrary type. If I use the C-string "5" instead of the number 5, the Microsoft compiler complains that the call is ambiguous.


"5" can be interpreted as a const char* or an int. That was not my intention. The template argument should be an integral type. The correct use case for concepts lite. With concepts lite, you can express constraints to the template parameter. Sad to say, but they will not be part of C++17. We should hope for the C++20 standard.

template <typename T>
  requires std::is_integral<T>::value
T fetch_mult(std::atomic<T>& shared, T mult){
  T oldValue = shared.load();
  while (!shared.compare_exchange_strong(oldValue, oldValue * mult));
  return oldValue;
}


The predicate std::is_integral<T>::value is evaluated by the compiler. If T is not an integral type, the compiler complains. std::is_integral is part of the type-traits library, which is new in C++11. The requires clause in the second line defines the constraint on the template parameter. The compiler checks the contract at compile time.

You can also define your own atomic types.



std::atomic<user defined type>

There are a lot of severe restrictions on a user-defined type MyType to get an atomic type std::atomic<MyType>. These restrictions are on the type itself, and also on the available operations that std::atomic<MyType> can perform.

For MyType, there are the following restrictions:

  • The copy assignment operator for MyType must be trivial for all base classes of MyType and all non-static members of MyType. Only a compiler-generated copy assignment operator is trivial. To put it the other way around: user-defined copy assignment operators are not trivial.
  • MyType must not have virtual methods or base classes.
  • MyType must be bitwise comparable so that the C functions memcpy or memcmp can be applied.

You can check the constraints on MyType at compile time with the type traits std::is_trivially_copy_constructible, std::is_polymorphic, and std::is_trivial. All of them are part of the type-traits library.

Only a reduced set of operations is supported for a user-defined atomic type std::atomic<MyType>.

Atomic operations

To get the big picture, the following table displays the atomic operations depending on the atomic type.

                         atomic_flag  atomic<bool>  atomic<user>  atomic<T*>  atomic<integral>
test_and_set                  X
clear                         X
is_lock_free                                X             X            X              X
load                                        X             X            X              X
store                                       X             X            X              X
exchange                                    X             X            X              X
compare_exchange_weak                       X             X            X              X
compare_exchange_strong                     X             X            X              X
fetch_add, +=                                                          X              X
fetch_sub, -=                                                          X              X
fetch_or, |=                                                                          X
fetch_and, &=                                                                         X
fetch_xor, ^=                                                                         X
++, --                                                                 X              X

Free atomic functions and smart pointers

The functionality of the class template std::atomic and of std::atomic_flag is also available as free functions. Because the free functions use atomic pointers instead of references, they are compatible with C. The atomic free functions support the same types as the class template std::atomic, but in addition the smart pointer std::shared_ptr. That is special because std::shared_ptr is not an atomic data type. The C++ committee recognized the necessity that instances of smart pointers, which maintain reference counters and objects under their hood, must be modifiable in an atomic way.

std::shared_ptr<MyData> p;
std::shared_ptr<MyData> p2= std::atomic_load(&p);
std::shared_ptr<MyData> p3(new MyData);
std::atomic_store(&p, p3);


To be clear: the atomic characteristic holds only for the reference counter, not for the object. That is the reason we get a std::atomic_shared_ptr in the future (I'm not sure if the future is called C++17 or C++20; I was often wrong in the past), which is based on a std::shared_ptr and guarantees the atomicity of the underlying object. That will also hold for std::weak_ptr. std::weak_ptr, which is a temporary resource owner, helps to break the cyclic dependencies of std::shared_ptr. The name of the new atomic std::weak_ptr will be std::atomic_weak_ptr. To complete the picture, the atomic version of std::unique_ptr is called std::atomic_unique_ptr.

What's next?

Now the foundations of the atomic data types are laid. In the next post, I will write about the synchronization and ordering constraints on atomics.






+1 #3 victor 2017-09-14 08:40
Thanks for all these posts. I am learning lots of new stuff from them. I have a question though. Is fetch_mult actually an atomic operation? If the atomic object "shared" changes between the shared.load call and the shared.compare_exchange_strong call you may get hung at the while loop.
0 #4 Pranabesh Das 2018-02-25 06:03
Hi Rainer,

I'm really learning a lot from your posts with great insight of modern multi threading techniques in C++ and like to thank you wholeheartedly for that.

However, I've a doubt regarding Mutex and Atomic operations.

I was comparing some performance measures between using Mutexes and Atomics.

I implemented a matrix dot-product (collected from Net) using a mutex and the same algo using an atomic variable, like this:

Solution 1:
static std::mutex myMutex;
std::lock_guard<std::mutex> mtex(myMutex);

Solution 2:
std::atomic &result;

In both cases, two vectors of equal lengths are taken and each thread is performing one element by element multiplication and add the "partial result" of each such multiplication in a scalar variable called "result". For example:

result += partial_sum;

While running these two different solution side-by-side on an 8-core with 1 to 100 threads, I've observed that sometimes the Atomic operation is slower than the mutex operation.
For example (using ):

For 1 threads, Mutex - Atomic performance diff is: -0.0032668
For 2 threads, Mutex - Atomic performance diff is: 0.0070272
For 3 threads, Mutex - Atomic performance diff is: 0.0050193
For 4 threads, Mutex - Atomic performance diff is: -0.0031304
For 5 threads, Mutex - Atomic performance diff is: -0.0195508
For 6 threads, Mutex - Atomic performance diff is: 0.0166401
For 98 threads, Mutex - Atomic performance diff is: -0.0070265
For 99 threads, Mutex - Atomic performance diff is: 0.0030019
For 100 threads, Mutex - Atomic performance diff is: 0.0040034

Could you please kindly explain the reason that why sometimes Atomic is performing slower than Mutex?

0 #5 Rainer Grimm 2018-03-01 06:19
I did a similar test and didn't get your numbers.

I assume you have compiled it with full optimization.

If you sum up the local results with minimal synchronization, the kind of synchronization does not matter; therefore, I would assume quite the same numbers for atomics or locks.
