C++ Core Guidelines: More Rules to Performance

In this post, I continue my journey through the rules to performance in the C++ Core Guidelines.  I will mainly write about design for optimisation.

Here are the two rules for today. 

Per.7: Design to enable optimization

When I read this title, I immediately think of move semantics. Why? Because you should implement your algorithms with move semantics and not with copy semantics. You automatically get a few benefits.

  1. Instead of an expensive copy, your algorithms use a cheap move.
  2. Your algorithm is much more stable because it requires no additional memory; therefore, you will get no std::bad_alloc exception.
  3. You can use your algorithm with move-only types such as std::unique_ptr.

Understood! Let me implement a generic swap algorithm that uses move semantics.

// swap.cpp

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <utility>

template <typename T>                                                // (3)
void swap(T& a, T& b) noexcept {
    T tmp(std::move(a));
    a = std::move(b);
    b = std::move(tmp);
}

class BigArray{

public:
    BigArray(std::size_t sz): size(sz), data(new int[size]){}

    BigArray(const BigArray& other): size(other.size), data(new int[other.size]){
        std::cout << "Copy constructor" << std::endl;
        std::copy(other.data, other.data + size, data);
    }
    
    BigArray& operator=(const BigArray& other){                      // (1)
        std::cout << "Copy assignment" << std::endl;
        if (this != &other){
            delete [] data;
            data = nullptr;

            size = other.size;
            data = new int[size];
            std::copy(other.data, other.data + size, data);
        }
        return *this;
    }
    
    ~BigArray(){
        delete[] data;
    }
private:
    std::size_t size;
    int* data;
};

int main(){

  std::cout << std::endl;

  BigArray bigArr1(2011);
  BigArray bigArr2(2017);
  swap(bigArr1, bigArr2);                                           // (2)

  std::cout << std::endl;

}

 

Fine. That was it. No! My coworker gave me his type BigArray. BigArray has a few flaws. I will write about the copy assignment operator (1) later. First, I have a more serious concern: BigArray supports only copy semantics, not move semantics. What will happen if I swap the BigArrays in line (2)? My swap algorithm uses move semantics (3) under the hood. Let's try it out.

[Program output: "Copy constructor", followed by two lines "Copy assignment".]

Nothing bad will happen. Traditional copy semantics kicks in, and you get the classical behaviour. Copy semantics is a kind of fallback for move semantics. Seen the other way around: a move is an optimised copy.

How is that possible? I asked for a move operation in my swap algorithm. The reason is that std::move returns an rvalue. A const lvalue reference can bind to an rvalue, and the copy constructor and the copy assignment operator take a const lvalue reference. If BigArray had a move constructor or a move assignment operator taking an rvalue reference, both would have higher priority than their copy pendants.

Implementing your algorithms with move semantics means that move semantics automatically kicks in if your data types support it. If not, copy semantics is used as a fallback. In the worst case, you get the classical behaviour.

I said the copy assignment operator has a few flaws. Here they are:

BigArray& operator=(const BigArray& other){                      
    if (this != &other){                                 // (1)
        delete [] data;                                        
        data = nullptr;

        size = other.size;
        data = new int[size];                            // (2)
        std::copy(other.data, other.data + size, data);  // (3)
    }
    return *this;
}

 

  1. I have to check for self-assignment. Most of the time self-assignment will not happen, but the special case is checked every time.
  2. If the allocation fails, this has already been modified: size is wrong, and data is already deleted. This means the copy assignment operator only gives the basic exception guarantee, not the strong one. The basic exception guarantee states that there is no leak after an exception; the strong exception guarantee states that in case of an exception the program can be rolled back to the state before. For more details on exception safety, read the Wikipedia article about exception safety.
  3. The line is identical to the line in the copy constructor.

You can overcome these flaws by implementing your own swap function. This is exactly what the C++ Core Guidelines suggest: C.83: For value-like types, consider providing a noexcept swap function. Here is the new BigArray with a non-member swap function and a copy assignment operator that uses it.

class BigArray{

public:
    BigArray(std::size_t sz): size(sz), data(new int[size]){}

    BigArray(const BigArray& other): size(other.size), data(new int[other.size]){
        std::cout << "Copy constructor" << std::endl;
        std::copy(other.data, other.data + size, data);
    }
	
    BigArray& operator = (BigArray other){                  // (2)
        swap(*this, other);                                 
        return *this;
    }
    
    ~BigArray(){
        delete[] data;
    }
	
    friend void swap(BigArray& first, BigArray& second){    // (1)
        std::swap(first.size, second.size);
        std::swap(first.data, second.data);
    }
	
private:
    std::size_t size;
    int* data;
};

 

The swap function in line (1) is not a member; therefore, a call swap(bigArray1, bigArray2) uses it. The signature of the copy assignment operator in line (2) may surprise you: because the argument is taken by copy, no self-assignment test is necessary. Additionally, the strong exception guarantee holds, and there is no code duplication. This technique is called the copy-and-swap idiom.

There are a lot of overloaded versions of std::swap available. The C++ standard library provides about 50 overloads.

Per.10: Rely on the static type system

This is a kind of meta-rule in C++: catch errors at compile time. I can keep my explanation of this rule quite short because I have already written a few articles on this important topic:

  • Use automatic type deduction with auto (auto-matically initialized) in combination with {}-initialisation and you will get a lot of benefits.
    1. The compiler always knows the right type: auto f = 5.0f.
    2. You can never forget to initialise a variable: auto a; will not compile.
    3. With {}-initialisation you can verify that no narrowing conversion kicks in; therefore, you can guarantee that the automatically deduced type is the type you expected: int i = {f}; makes the compiler check that f fits into an int without narrowing. If not, you will get a compiler error. This check does not happen without braces: int i = f;.
  • Check type properties at compile time with static_assert and the type-traits library. If the check fails, you will get a compile-time error: static_assert(std::is_integral<T>::value, "T should be an integral type!");.
  • Make your arithmetic type-safe with user-defined literals and the new built-in literals: auto distancePerWeek = (5 * 120_km + 2 * 1500_m - 5 * 400_m) / 5;.
  • override and final provide guarantees for virtual methods. The compiler checks with override that you actually overrode a virtual method, and it guarantees with final that you cannot override a virtual method declared final.
  • The new null pointer constant nullptr clears up in C++11 the ambiguity between the number 0 and the macro NULL.

What's next?

My journey through the rules to performance will go on. In the next post, I will write in particular about how to move computation from runtime to compile time and how you should access memory.

 

 

Thanks a lot to my Patreon Supporters: Eric Pederson, Paul Baxter, Sai Raghavendra Prasad Poosa, Meeting C++, Matt Braun, and Avi Lachmish.