[cfe-dev] [RFC] Proposal to contribute Intel’s implementation of C++17 parallel algorithms

Jeff Hammond via cfe-dev cfe-dev at lists.llvm.org
Sun Dec 3 17:45:37 PST 2017

It would be nice to keep PSTL and OpenMP orthogonal, even if _Pragma("omp
simd") does not require runtime support.  It should be trivial to use
_Pragma("clang loop vectorize(assume_safety)") instead, by wrapping all of
the different compiler vectorization pragmas in preprocessor logic, as
sketched below.  I similarly recommend _Pragma("GCC ivdep") for GCC and
_Pragma("vector always") for ICC.  This requires O(n_compilers) effort
instead of O(1), but orthogonality is worth it.
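
A minimal sketch of that preprocessor dispatch (the PSTL_PRAGMA_SIMD macro
name is mine, not taken from any shipping header):

  // Check ICC first: it also defines __GNUC__, as does Clang.
  #if defined(__INTEL_COMPILER)
  #  define PSTL_PRAGMA_SIMD _Pragma("vector always")
  #elif defined(__clang__)
  #  define PSTL_PRAGMA_SIMD _Pragma("clang loop vectorize(assume_safety)")
  #elif defined(__GNUC__)
  #  define PSTL_PRAGMA_SIMD _Pragma("GCC ivdep")
  #else
  #  define PSTL_PRAGMA_SIMD
  #endif

  // Example use inside an algorithm body; each of these loop pragmas
  // must immediately precede the loop it annotates.
  template <typename It, typename F>
  void simd_for_each(It first, It last, F f) {
      PSTL_PRAGMA_SIMD
      for (; first != last; ++first)
          f(*first);
  }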

While OpenMP is vendor/compiler-agnostic, users should not be required to
use -fopenmp or similar to enable vectorization from PSTL, nor should the
compiler enable any OpenMP pragma by default.  I know of cases where merely
using the -fopenmp flag alters code generation in a performance-visible
manner, and enabling the OpenMP "simd" pragma by default may surprise some
users, particularly if no other OpenMP pragmas are enabled by default.
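
For what it's worth, GCC already separates these concerns: -fopenmp-simd
honors "omp simd" loop pragmas without enabling any other OpenMP pragmas or
requiring libgomp.  A minimal illustration (I am not assuming an equivalent
flag exists on other compilers):

  // scale.cpp -- compile with: g++ -O2 -fopenmp-simd -c scale.cpp
  // The simd hint is honored, but no OpenMP runtime is linked and no
  // other OpenMP directives are enabled.
  void scale(float* a, const float* b, int n, float s) {
  #pragma omp simd
      for (int i = 0; i < n; ++i)
          a[i] = s * b[i];
  }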


Jeff

(who works for Intel but not on any software products, and has been a heavy
user of Intel PSTL since it was released, if anyone is keeping track of such
things)
On Wed, Nov 29, 2017 at 4:21 AM, Kukanov, Alexey via cfe-dev <
cfe-dev at lists.llvm.org> wrote:
> Hello all,
> At Intel, we have developed an implementation of C++17 execution policies
> for algorithms (often referred to as Parallel STL). We hope to contribute it
> to libc++/LLVM, so we would like to ask the community for comments on this.
> The code is already published at GitHub.
> It supports the C++17 standard execution policies (seq, par, par_unseq) as
> well as the experimental unsequenced policy (unseq) for SIMD execution. At
> the moment, about half of the C++17 standard algorithms that must support
> execution policies are implemented; a few more will be ready soon, and the
> work continues.
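> For illustration, here is a minimal usage sketch; it relies only on the
> standard header <execution> and the standard policy names, not on any
> implementation-specific layout:
>
>   #include <algorithm>
>   #include <execution>
>   #include <vector>
>
>   int main() {
>       std::vector<double> v(1000000, 1.0);
>       // par runs multi-threaded; par_unseq may additionally vectorize.
>       std::sort(std::execution::par, v.begin(), v.end());
>       std::for_each(std::execution::par_unseq, v.begin(), v.end(),
>                     [](double& x) { x *= 2.0; });
>   }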
> The tests that we use are also available at GitHub; needless to say, we will
> contribute those as well.
> The implementation is not specific to Intel’s hardware. For thread-level
> parallelism it uses TBB* (https://www.threadingbuildingblocks.org/) but
> abstracts it behind an internal API which can be implemented on top of other
> threading/parallel solutions, so it is for the community to decide which
> ones to use. For SIMD parallelism (unseq, par_unseq) we use #pragma omp simd
> directives; these are vendor-neutral and do not require any OpenMP runtime
> support.
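> To illustrate the shape of that internal API (the names below are invented
> for this sketch, not the actual ones), a TBB-based backend could look like:
>
>   #include <tbb/blocked_range.h>
>   #include <tbb/parallel_for.h>
>
>   namespace __par_backend {
>   // Neutral entry point called by the algorithm layer; another runtime
>   // only needs to provide the same signature.
>   template <typename Index, typename Body>
>   void parallel_for(Index first, Index last, Body body) {
>       tbb::parallel_for(tbb::blocked_range<Index>(first, last),
>                         [&](const tbb::blocked_range<Index>& r) {
>                             body(r.begin(), r.end());
>                         });
>   }
>   } // namespace __par_backend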
> The current implementation meets the spirit but not always the letter of
> the standard, because it has to be separate from, but also coexist with,
> implementations of standard C++ libraries. While preparing the contribution,
> we will address inconsistencies, adjust the code to meet community
> standards, and better integrate it into the standard library code.
> We are also proposing that our implementation is included into libstdc++,
> the GNU C++ library. Compatibility between the implementations seems useful
> as it can reduce the amount of work for everyone. We hope to keep the code
> mostly common between the two libraries, and would like to know if you think
> that is too optimistic to expect. Obviously we plan to use appropriate open
> source licenses to meet the projects’ requirements.
> We expect to keep developing the code and will take the responsibility for
> maintaining it (with community contributions, of course). If there are other
> community efforts to implement parallel algorithms, we are willing to
> collaborate. We look forward to your feedback, both on the overall idea and,
> if it is supported, on the next steps we should take.
> Regards,
> - Alexey Kukanov
> * Note that TBB itself is highly portable (and has been ported by the
> community to Power and ARM architectures) and permissively licensed, so it
> could be the base for the threading infrastructure. But the Parallel STL
> implementation itself does not require TBB.

Jeff Hammond
jeff.science at gmail.com
