The Pipes-and-Filters architecture pattern describes the structure of systems that process data streams.
The Pipes-and-Filters Pattern is similar to the Layers Pattern. The idea of the Layers Pattern is to structure the system into layers so that higher layers are based on the services of lower layers. The Pipes-and-Filters Pattern naturally extends the Layers Pattern: the filters take the role of the layers, and the pipes implement the data flow between them.
- A system that processes data in several steps
- Each step processes its data independently of the others
- Divide the task into several processing steps
- The output of each processing step is the input of the next processing step
- The processing step is called a filter; the data channel between the filters is called a pipe
- The data comes from the data source and ends up in the data sink
A filter:
- Gets input data
- Performs its operation on the input data
- Produces output data

A pipe:
- Transmits data
- Buffers data in a queue
- Synchronizes the neighboring filters

The data source:
- Produces the input to the processing pipeline

The data sink:
- Consumes the data
The most interesting part of the Pipes-and-Filters Pattern is the data flow. There are several ways to control it.
Push Principle
- A filter is started when the previous filter passes its data to it
- The (n-1)-th filter sends (write operation) data to the n-th filter
- The data source starts the data flow
Pull Principle
- A filter is started by requesting data from the previous filter
- The n-th filter requests data from the (n-1)-th filter
- The data sink starts the data flow
Mixed Push/Pull Principle
- The n-th filter requests data from the (n-1)-th filter and explicitly passes it to the (n+1)-th filter
- The n-th filter is the only active filter in the processing chain
- The n-th filter starts the data flow
Active Filters as Independent Processes
- Each filter is an independent process that reads data from the previous queue or writes data to the following queue
- The n-th filter can read data only after the (n-1)-th filter has written data to the connecting queue
- The n-th filter can write its data only after the (n+1)-th filter has read the connecting queue
- This structure is known as the Producer/Consumer
- Each filter can start the data flow
The most prominent example of the Pipes-and-Filters Pattern is the UNIX Command Shell.
Unix Command Shell
Find the five Python files in my python3.6 installation that have the most lines:
Here are the steps of the pipeline:
- Find all files ending with .py:
find -name "*.py"
- Get the number of lines of each file:
xargs wc -l
- Sort numerically:
sort -n
- Remove the last two lines, which contain irrelevant statistical information:
head -n -2
- Get the five last lines:
tail -5

Combined into one command, the pipeline reads: find -name "*.py" | xargs wc -l | sort -n | head -n -2 | tail -5
Finally, here is the classic of command-line processing using pipes from Douglas McIlroy, his famous word-frequency solution:

tr -cs A-Za-z '\n' | tr A-Z a-z | sort | uniq -c | sort -rn | sed ${1}q

If you want to know what this pipeline does, read the full story behind it in the article “More shell, less egg”.
Thanks to the ranges library in C++20, the Pipes-and-Filters Pattern is directly supported in C++.
The following program firstTenPrimes.cpp displays the first ten primes, starting with 1000.
The data source (std::views::iota(1'000)) creates the natural numbers, starting with 1000. First, the numbers are filtered so that only odd values pass (line 1), and then only primes (line 2). The pipeline stops after ten values (line 3) and pushes the elements onto the std::vector (line 4). The convenience function std::ranges::to creates a new range (line 4). This function is new in C++23. Therefore, I can only execute the code with the newest Windows compiler on Compiler Explorer.
Pros and Cons
In the following comparison, I use the term universal interface. It means that all filters speak the same language, such as XML or JSON.
- When one filter pushes or pulls the data directly from its neighbor, no intermediate buffering of data is necessary
- A filter plays the role of a layer in the Layers Pattern and can, therefore, easily be replaced
- Filters that implement the universal interface can be reordered
- Each filter can work independently of the others and does not have to wait until the neighboring filter is done. This enables an optimal distribution of work between the filters.
- Filters can run in a distributed architecture. The pipes connect the remote entities. The pipes can also split or synchronize the data flow. Pipes-and-Filters are heavily used in distributed or concurrent architectures and provide excellent performance and scalability opportunities.
- The parallel processing of data may be inefficient due to communication, serialization, and synchronization overhead
- A filter such as a sort needs the entire data before it can produce its output
- If the processing power of the filters is not homogeneous, you need big queues between them
- To support the universal interface, the data must be converted into the universal format between the filters
- The most complicated part of this pattern is error handling. When the Pipes-and-Filters architecture crashes during data processing, you end up with data that is partially but not fully processed. Now, you have a few options:
- Start the process once more if you have the original data.
- Use only the fully processed data.
- Introduce markers in your input data. When your system crashes, you restart the process based on the markers.
The Broker Pattern structures distributed software systems whose components interact via remote service invocations. It is responsible for coordinating the communication, its results, and its exceptions. In my next post, I will dive deeper into the architectural pattern Broker.