All parallel algorithms are intended to have signatures equivalent
to the ISO C++ algorithms they replace. For instance, the
std::adjacent_find function is declared as:

namespace std
{
  template<typename _FIter>
    _FIter
    adjacent_find(_FIter, _FIter);
}

This means there should be something equivalent for the parallel
version. Indeed, this is the case:

namespace std
{
  namespace __parallel
  {
    template<typename _FIter>
      _FIter
      adjacent_find(_FIter, _FIter);

    ...
  }
}
But... why the ellipsis?
The ellipses in the example above represent additional overloads required for the parallel version of the function. These additional overloads are used to dispatch calls from the ISO C++ function signature to the appropriate parallel function (or sequential function, if no parallel functions are deemed worthy), based on either compile-time or run-time conditions.
Compile-time conditions are referred to as "embarrassingly
parallel," and are denoted with the appropriate dispatch object, i.e.,
one of __gnu_parallel::sequential_tag, __gnu_parallel::parallel_tag,
__gnu_parallel::balanced_tag, __gnu_parallel::unbalanced_tag,
__gnu_parallel::omp_loop_tag, or __gnu_parallel::omp_loop_static_tag.
Run-time conditions depend on the hardware being used, the number
of threads available, etc., and are denoted by the use of the enum
__gnu_parallel::parallelism. Values of this enum include
__gnu_parallel::sequential, __gnu_parallel::parallel_unbalanced,
__gnu_parallel::parallel_balanced, __gnu_parallel::parallel_omp_loop,
__gnu_parallel::parallel_omp_loop_static, and
__gnu_parallel::parallel_taskqueue.
Putting all this together, the general view of overloads for the parallel algorithms looks like this:
ISO C++ signature
ISO C++ signature + sequential_tag argument
ISO C++ signature + parallelism argument
Please note that the implementation may use additional functions
(designated with the _switch suffix) to dispatch from the
ISO C++ signature to the correct parallel version. Also, some of the
algorithms do not support run-time conditions, so for them the last
overload is missing.
Several aspects of the overall runtime environment can be manipulated by standard OpenMP function calls.
To specify the number of threads to be used for an algorithm, use the
function omp_set_num_threads. An example:

#include <stdlib.h>
#include <omp.h>

int main()
{
  // Explicitly set number of threads.
  const int threads_wanted = 20;
  omp_set_dynamic(false);
  omp_set_num_threads(threads_wanted);

  // Do work.

  return 0;
}
Other parts of the runtime environment able to be manipulated include
nested parallelism (omp_set_nested), schedule kind (omp_set_schedule),
and others. See the OpenMP documentation for more information.
To force an algorithm to execute sequentially, even though parallelism
is switched on in general via the macro _GLIBCXX_PARALLEL,
add __gnu_parallel::sequential_tag() to the end
of the algorithm's argument list, or explicitly qualify the algorithm
with the __gnu_serial:: namespace. Like so:

std::sort(v.begin(), v.end(), __gnu_parallel::sequential_tag());

or

__gnu_serial::sort(v.begin(), v.end());
In addition, some parallel algorithm variants can be
enabled/disabled/selected at compile-time.
See compiletime_settings.h and features.h for details.
The default parallelization strategy, the choice of specific algorithm
strategy, the minimum threshold limits for individual parallel
algorithms, and aspects of the underlying hardware can be specified as
desired via manipulation of __gnu_parallel::_Settings member data.
First off, the choice of parallelization strategy: serial, parallel,
or implementation-deduced. This corresponds to
__gnu_parallel::_Settings::algorithm_strategy and is a value of enum
__gnu_parallel::_AlgorithmStrategy type. Choices include: heuristic,
force_sequential, and force_parallel. The default is
implementation-deduced, i.e. heuristic.
Next, the sub-choices for algorithm implementation. Specific
algorithms like find or sort can be implemented in multiple ways: when
this is the case, a __gnu_parallel::_Settings member exists to pick
the default strategy. For example,
__gnu_parallel::_Settings::sort_algorithm can have any value of enum
__gnu_parallel::_SortAlgorithm: MWMS, QS, or QS_BALANCED.
Likewise for setting the minimal threshold for algorithm
parallelization. Parallelism always incurs some overhead. Thus, it is
not helpful to parallelize operations on very small sets of
data. Because of this, measures are taken to avoid parallelizing below
a certain, pre-determined threshold. For each algorithm, a minimum
problem size is encoded as a variable in the active
__gnu_parallel::_Settings object. This threshold variable follows the
naming scheme __gnu_parallel::_Settings::[algorithm]_minimal_n. So,
for fill, the threshold variable is
__gnu_parallel::_Settings::fill_minimal_n.
Finally, hardware details like L1/L2 cache size can be hardwired
via __gnu_parallel::_Settings::L1_cache_size and friends.

All these configuration variables can be changed by the user, if
desired. Please see settings.h for complete details.
A small example of tuning the default:

#include <parallel/algorithm>
#include <parallel/settings.h>

int main()
{
  __gnu_parallel::_Settings s;
  s.algorithm_strategy = __gnu_parallel::force_parallel;
  __gnu_parallel::_Settings::set(s);

  // Do work... all algorithms will be parallelized, always.

  return 0;
}
One namespace contains versions of code that are always explicitly
sequential: __gnu_serial.

Two namespaces contain the parallel mode: std::__parallel
and __gnu_parallel.
Parallel implementations of standard components, including template
helpers to select parallelism, are defined in namespace
std::__parallel. For instance, std::transform from algorithm has a
parallel counterpart in std::__parallel::transform from
parallel/algorithm. In addition, these parallel implementations are
injected into namespace __gnu_parallel with using declarations.

Support and general infrastructure is in namespace __gnu_parallel.
More information, and an organized index of types and functions related to the parallel mode on a per-namespace basis, can be found in the generated source documentation.