Exploit data parallelism using hpx simd and par_simd policies.

Srinivas Yadav

Introduction

Vectorization is a technique that enables in-core parallelism using CPU vector registers, allowing us to exploit data parallelism. Recent additions to the parallel algorithms in C++17 and C++20 accept an execution policy as the first argument, which changes the execution behaviour based on the given policy. We implement two new execution policies, hpx::execution::simd and hpx::execution::par_simd. The former executes sequentially with vectorization added, whereas the latter executes in parallel with vectorization. For both of these newly implemented policies, the iterator function no longer accepts static types; instead it accepts only generic types, i.e. templated or generic function objects. This allows the same function object to work with both non-simd and simd policies with very little or no change in the code. We used std::experimental::simd (available in C++20 with GCC >= 11.1 and Clang >= 12) as the vectorization backend in implementing the two new execution policies. In the following sections we discuss example codes showing how to use these new facilities adapted to HPX, and present benchmarks performed on various architectures using different kernels.

Example Usage

The following example code snippet describes the use of the HPX for_each algorithm with different execution policies such as seq, par, simd and par_simd.
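
Since the original snippet is not reproduced in this archive, here is a minimal sketch consistent with the description below (the helper name test_for_each and the exact kernel body are illustrative assumptions, not the post's verbatim code):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // The same generic lambda works for seq, par, simd and par_simd,
    // because x may be an arithmetic T or a std::experimental::simd<T>.
    template <typename ExPolicy, typename T, typename Gen>
    std::vector<T> test_for_each(ExPolicy policy, std::size_t n, Gen gen)
    {
        std::vector<T> nums(n);
        std::generate(nums.begin(), nums.end(), gen);

        hpx::for_each(policy, nums.begin(), nums.end(), [](auto& x) {
            // Unqualified sin/cos resolve to std::sin/std::cos for
            // scalars and to the std::experimental overloads (found
            // via ADL) for vector_packs.
            using std::sin;
            using std::cos;
            for (int i = 0; i < 100; ++i)
                x = sin(x) + cos(x);
        });
        return nums;
    }

    // e.g. test_for_each<decltype(hpx::execution::simd), float>(
    //          hpx::execution::simd, 1000000, [] { return 0.5f; });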

Note that we passed a generic lambda to the for_each algorithm because the same lambda can then be used with different execution policies. The template argument ExPolicy is used to accept an execution policy, T is used for handling the data type when creating the std::vector, and Gen is used to accept a generator function to fill the std::vector. If the execution policy is seq or par, then the variable x is of arithmetic type T, i.e. int, float and so on, whereas if the execution policy is simd or par_simd, then x is of type std::experimental::simd<T>, i.e. std::experimental::simd<int>, std::experimental::simd<float> and so on. std::experimental::simd<T> is a vector_pack of type T, which is the value_type of the iterator nums. Contiguous elements of the iterator are loaded internally into the vector_pack. The sin and cos functions in the lambda are adapted to both arithmetic types and vector_pack types (available in the std::experimental namespace).

The lambda used in this code snippet is a compute-bound kernel because of its high arithmetic intensity: the loop runs for 100 steps, performing sin and cos operations at each step.

Now we look at another example, using a memory-bound kernel (performing a SAXPY operation) with the help of the transform algorithm.
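
A minimal sketch of such a transform call follows (the helper name test_transform and the scalar a are illustrative assumptions):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>

    #include <vector>

    // Memory-bound SAXPY kernel: y = a * x + y.
    template <typename ExPolicy, typename T>
    void test_transform(ExPolicy policy, std::vector<T> const& x,
        std::vector<T>& y, T a)
    {
        hpx::transform(policy, x.begin(), x.end(), y.begin(), y.begin(),
            [a](auto xi, auto yi) {
                // xi and yi are of type T for seq/par and vector_packs
                // for simd/par_simd; a is broadcast in the simd case.
                return a * xi + yi;
            });
    }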

This code snippet is very similar to the previous one, with changes only in the lambda and the algorithm. Here as well, the arguments to the lambda are of one of two types: an arithmetic type (if the execution policy is seq or par), or a vector_pack type (if the execution policy is simd or par_simd).

The following code snippet describes the usage of algorithms such as count and find. These do not require any lambda, hence vectorization comes straightforwardly with the implementation itself.
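
A usage sketch (hypothetical values; only the algorithm calls matter here):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>

    #include <vector>

    void count_and_find_example(std::vector<int> const& nums)
    {
        // Count occurrences of 3, vectorized internally by the simd policy.
        auto threes =
            hpx::count(hpx::execution::simd, nums.begin(), nums.end(), 3);

        // Find the first occurrence of 42 in parallel with vectorization.
        auto pos =
            hpx::find(hpx::execution::par_simd, nums.begin(), nums.end(), 42);

        (void) threes;
        (void) pos;
    }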

This class of algorithms is easier and more amenable to vectorization because of the minimal user involvement, i.e. no lambda or function object is taken as an argument. Note that we get vectorization benefits only if the iterators passed to the algorithm are random access iterators.

Example Implementation
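
The implementation snippet from the original post is not reproduced here; the following is a simplified sketch of the same three-phase structure (prefix loop, vectorized main loop, post-fix loop) written directly against std::experimental::simd. The real datapar_loop in HPX is more general; the function and variable names here are illustrative:

    #include <cstddef>
    #include <cstdint>
    #include <experimental/simd>

    namespace stdx = std::experimental;

    template <typename T, typename F>
    void datapar_loop_sketch(T* first, T* last, F&& f)
    {
        using pack_t = stdx::native_simd<T>;
        constexpr std::size_t N = pack_t::size();

        // 1. Prefix loop: call f element-wise until the first aligned
        //    element is reached (cf. datapar_loop_step::call1).
        while (first != last &&
            reinterpret_cast<std::uintptr_t>(first) %
                    stdx::memory_alignment_v<pack_t> != 0)
        {
            f(*first++);
        }

        // 2. Main loop: load a full vector_pack, apply f, store the
        //    results back (cf. datapar_loop_step::callv).
        for (; last - first >= static_cast<std::ptrdiff_t>(N); first += N)
        {
            pack_t pack(first, stdx::vector_aligned);
            f(pack);
            pack.copy_to(first, stdx::vector_aligned);
        }

        // 3. Post-fix loop: fewer than N elements remain, so they cannot
        //    fill a vector_pack and are handled sequentially.
        while (first != last)
            f(*first++);
    }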

The above code snippet shows the implementation of the datapar_loop function, which is the main vectorization backend helper function for most of the iterative algorithms in HPX. The function call in the datapar_loop class can be divided into three main steps:

  • First, a prefix loop processes elements sequentially, calling the function f on each element through the helper function datapar_loop_step::call1. This loop runs until it reaches the first aligned element.
  • Second, the main vectorization loop, where the actual vectorization happens through the datapar_loop_step::callv function, which creates a vector_pack, loads the elements from the iterator, calls the function f, and stores the results back.
  • Finally, the post-fix loop handles the elements at the end of the array or container that number fewer than the vector_pack size and hence cannot fill a single vector_pack. They are handled sequentially, similar to the prefix loop.

Benchmarks

We ran benchmarks for two classes of algorithms. The first class comprises iterative algorithms, where each element from the iterator gets mapped using some function; here we used the for_each and transform algorithms with compute-bound and memory-bound kernels. For the second class we picked algorithms that contain conditional statements, which can be described as algorithms with simd mask reductions; for this class we chose the count and find algorithms.

Results

The following figure shows the benchmark of the for_each algorithm with the compute-bound kernel from Example 1. These benchmarks were run on an Intel Xeon Skylake processor with AVX-512 (an AVX-512 vector register can hold 16 floating point elements). We can see a 12x speed up with the simd policy and over 140x with par_simd.

[Figure: benchmark results graph]

The above image shows the benchmark results graph depicting the speed ups of the simd, par and par_simd policies against the seq execution policy. These benchmarks were run on an AMD EPYC 7H12 with AVX2 (an AVX2 vector register can hold 8 floating point elements). The arrays used contain 128 billion elements, with float and double as data types. We can see super-linear scaling of the simd speed up in the compute-bound kernel, i.e. the speed up of simd (10.37) exceeds the vector_pack size (8), because the sin and cos implementations for scalar arithmetic types and for vector_pack types are slightly different. We see three orders of magnitude of speed up when using the par_simd execution policy.

Conclusion

From the examples illustrated and the benchmarks, we can see how easy it is to vectorize code using the simd and par_simd execution policies and gain massive speed ups with very little change in the code. Adapting further algorithms to the simd and par_simd policies is still in progress. You can find the list of algorithms already adapted to these policies here.

GSoC 2021 – Add vectorization to par_unseq implementations of Parallel Algorithms

by Srinivas Yadav

GSoC 2021 Final Report

Abstract

HPX algorithms supported data parallelism through explicit vectorization using the Vc library, but only for a few algorithms such as for_each, transform and count. Recently the support for the Vc library has been deprecated and replaced by std::experimental::simd. In this project I adapted many algorithms to datapar using the new std::experimental::simd backend with two new policies, simd and par_simd, built on the data-parallel types proposed in the experimental namespace. For all the algorithms adapted to datapar, separate tests have been created.

I have created a new GitHub repository, std-simd-perf, for the benchmarks of the algorithms that I adapted to datapar; it contains various plots for speed-up analysis and roofline models for artificial benchmarks and real-world applications.

Pull Requests for HPX Repo

Merged

Open

Other Adapted Algorithms to datapar [code]: 

  • adjacent_difference
  • adjacent_find
  • all_of, any_of, none_of
  • copy
  • count
  • find
  • for_each
  • generate
  • transform

Performance Benchmarks

  • The std-simd-perf repository contains all the benchmarks for simd on artificial algorithms such as for_each, transform, count, find, etc., and on real-world examples such as the Mandelbrot set.
  • These benchmarks were run on different clusters and have separate branches for each architecture in the repo.
  • Speed up plot for a compute bound kernel using for_each algorithm
  • Speed up plot for a simd reduction based algorithm using count algorithm

Beyond GSoC

  • Adapt the rest of the algorithms (#2333) to support data parallelism.
  • I will continue working with the STE||AR Group on HPX in other areas as well, as this is a great community to learn from, with great people, and to expand my knowledge.

Acknowledgements

Special thanks to Hartmut Kaiser, Nikunj Gupta and Auriane R. for all the guidance and help with frequent meetings.

GSoC 2021 – Adapting algorithms to C++ 20 and Ranges TS

by Akhil Nair

Introduction:

My main task involves adapting the remaining algorithms from this issue to C++20 by using the tag_invoke CPO mechanism to add the correct overloads for the algorithms as specified by the C++20 standard. It also involves adding ranges and sentinel overloads for these algorithms, as well as ensuring that the base implementations support sentinels. I also added doxygen documentation for each overload.

We have managed to cover almost all algorithms, thanks to contributions prior to the 2021 GSoC period from Giannis, Hartmut, Mikael and others, as well as from Chuanqiu He and Karame for adapting rotate/rotate_copy and adjacent_difference respectively.

Apart from the adaptation work, I have also created PRs adding the shift_left and shift_right algorithms (Issue #3706) and the ranges starts_with and ends_with algorithms (Issue #5381) and they’re currently under review.

Details:

Tag_invoke:

We render the old hpx::parallel overloads deprecated and add new tag_fallback_dispatch overloads according to the function signatures specified in the C++20 standard, using the tag_invoke CPO mechanism to dispatch calls to the correct overloads.

The segmented overloads for an algorithm use tag_dispatch, while the normal parallel and container overloads use tag_fallback_dispatch, so that all the segmented overloads are preferred before the call falls back to the remaining parallel overloads.
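
For readers unfamiliar with the pattern, here is a minimal, generic sketch of a tag_invoke-style CPO (the algorithm my_count is hypothetical; HPX's actual tag_fallback_dispatch machinery adds a fallback tier on top of this basic idea):

    #include <cstddef>
    #include <utility>

    namespace mylib {

        // The CPO: a callable tag object that dispatches via ADL.
        inline constexpr struct my_count_t
        {
            template <typename... Args>
            auto operator()(Args&&... args) const
                -> decltype(tag_invoke(*this, std::forward<Args>(args)...))
            {
                // ADL on the tag type finds the best tag_invoke overload.
                return tag_invoke(*this, std::forward<Args>(args)...);
            }
        } my_count{};

        // One overload, chosen for (iterator, iterator, value) calls.
        template <typename InIter, typename T>
        std::ptrdiff_t tag_invoke(
            my_count_t, InIter first, InIter last, T const& value)
        {
            std::ptrdiff_t result = 0;
            for (; first != last; ++first)
                if (*first == value)
                    ++result;
            return result;
        }
    }    // namespace mylib

    // Usage: mylib::my_count(v.begin(), v.end(), 42);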

Range and sentinel overloads:

C++20 introduced ranges overloads for many of the algorithms, and we have done the same for our algorithms, available in the hpx::ranges namespace.

We can pass a range either as a single range argument or as an iterator-sentinel pair. The range overloads also make use of tag_fallback_dispatch for overload resolution.
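
A short usage sketch of the two call styles (assuming hpx::ranges::for_each as described above):

    #include <hpx/algorithm.hpp>

    #include <vector>

    void ranges_overload_example()
    {
        std::vector<int> v = {1, 2, 3, 4};

        // The whole range passed as a single argument.
        hpx::ranges::for_each(v, [](int& x) { x *= 2; });

        // The equivalent iterator-sentinel pair.
        hpx::ranges::for_each(v.begin(), v.end(), [](int& x) { x += 1; });
    }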

Separating the segmented overloads:

For algorithms having segmented overloads, we add tag_dispatch overloads and remove the forward declarations in both files, separating the segmented overloads completely from the parallel overloads.

Shift left and shift right algorithms:

The shift_left and shift_right algorithms have been added. They make use of reverse in their parallel implementations (anyone reading this in the future, feel free to attempt a more efficient parallel implementation if possible). Range and sentinel overloads for these algorithms have been added as well, along with the ranges starts_with and ends_with algorithms.
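
For intuition only, here is a rough sequential sketch of how reverse can drive such rotations/shifts; each std::reverse decomposes into independent swaps, which is what makes this formulation friendly to parallel execution (this is an illustration of the idea, not HPX's actual implementation):

    #include <algorithm>

    // Rotate [first, last) so that middle becomes the new first element,
    // composed from three reverses.
    template <typename RandomIt>
    RandomIt rotate_via_reverse(RandomIt first, RandomIt middle, RandomIt last)
    {
        std::reverse(first, middle);
        std::reverse(middle, last);
        std::reverse(first, last);
        return first + (last - middle);    // new position of old first
    }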

Other:

I’ve also been looking into the senders and receivers proposal, and into the performance issues of the scan partitioner, trying to measure the execution time and scheduling of the various stages of the scan algorithm.

PR Details:

The following PRs have been merged as of writing this report :-

Open PRs currently under review :-

My experience:

My experience working with and being mentored by the STE||AR Group has been amazing. This being my second GSoC, I was looking for an organization that had both challenging, interesting work and a helpful, supportive community, and the STE||AR Group ticked both of those boxes wonderfully.

Hartmut and Giannis were amazing mentors and have been very helpful. The weekly meetings with them and Auriane were very useful to keep track of the progress and get guidance on how to proceed. Thanks to Hartmut, Auriane and Mikael for reviewing my PRs. I’m also grateful for the help of other members of the community who were very helpful and responsive on the IRC chat.

Over the summer my understanding of C++ has definitely increased, though there is a LOT more to cover; I’m sure continuing to work on HPX (and asking questions on the IRC) will help with that. Having access to community members who have such a deep understanding of the topics, and being able to ask them questions, is a very valuable advantage of contributing to HPX.

I fully intend to continue working on HPX and with the STE||AR Group after GSoC is over and look forward to learning and working on more interesting stuff in the coming months.

HPX’s Season of Docs 2021

By Rachitt Shah

HPX was recently selected to be part of Google’s Season of Docs (GSoD), a program designed to improve the documentation of open source software, as well as being a Google Summer of Code organization.

GSoD aims to cover the documentation gaps faced by organizations due to various reasons, alongside giving technical writers an avenue to showcase their skills.

I will be helping organize and update the existing documentation to give it a more navigable, user-friendly structure, something many users have had issues with in the current documentation. I will work closely with the HPX team and our users to collect feedback, find user pain points, and improve the preexisting docs, which mainly consist of the build instructions.

Alongside this, I will create a “design document” containing guidelines for adding new content to the documentation: tips on how to structure new sections, general guidelines on what sort of content should be presented in which chapters, etc. The project may also include content rearrangement and a change of hierarchy, if the users find it is needed.

I am currently working on a timeline and action items, and researching a possible shift to another documentation platform.

I am reachable at rachitt01@gmail.com or on IRC as rachitt_shah; please contact me to suggest changes to the documentation or to provide feedback. We can always benefit from your ideas.

About me: I’m an undergrad majoring in electronics, and a casual sport programmer as well. I’ve been a product manager and venture capital intern in the past, and have done Google Summer of Code with OpenAstronomy.

GSoD Final Report

By Rebecca Stobaugh

We’ve reached the end of Google’s Season of Docs, and we’ve accomplished a lot in the past three months. My initial proposal was to work on three sections of the manual, and we have far exceeded our goal, managing to make changes to twelve different sections of the documentation. The majority of the work I’ve done has consisted of cleaning up grammatical errors and improving sentence structure. I have also added a style guide to the wiki, which should help standardize future changes to the documentation. The style guide can be found in the “HPX Source Code Structure and Coding Standards” wiki document under the section “Documentation Style Guide”. For a complete list of my pull requests during Season of Docs, please see here. To view my changes to the wiki, please see here.

Announcing HPX’s Season of Docs

By Rebecca Stobaugh

HPX was recently selected to be part of Google’s Season of Docs (GSoD), a program designed to improve the documentation of open source software. While the STE||AR Group has created extensive documentation for HPX, this documentation has been written by several different people, which has led to some inconsistencies and awkward organization. I am a technical writer and English PhD student who has been selected to edit and streamline HPX’s documentation. My goal is to clean up the content, concentrating on both grammatical issues and design concerns, to create a more cohesive, user-friendly product. My primary focus will be on two sections of the STE||AR Group’s instruction manual: “HPX Build System” and “Launching and Configuring HPX Applications.” If time allows, I will also revise the “Why HPX” page, with an emphasis on condensing and trimming repetitive content.

You can read my GSoD proposal here.

Trip Report: ICML 2019


By: Bita Hasheminezhad

A few weeks ago, I had the opportunity to attend the International Conference on Machine Learning (ICML), the premier gathering of professionals dedicated to the advancement of machine learning. The thirty-sixth ICML was held June 9th to 15th at the Long Beach Convention Center.

Compiling and Running Blazemark

By Shahrzad Shirzad

Blazemark is the benchmark suite for the Blaze library. In order to compile and run Blazemark with the HPX backend, take the following steps:

  1. Change the Configfile at blaze/blazemark by filling in the CXX=, CXXFLAGS= and LIBRARY_DIRECTIVES= fields:
    This is an example of the configurations used for Clang:
    # Compiler selection
    CXX="clang++"
    # Special compiler flags
    CXXFLAGS="-O3 -march=native -std=c++17 -stdlib=libc++ -DNDEBUG -fpermissive -DBLAZE_USE_HPX_THREADS -isystem /hpx/install/path/include -Wl,-wrap=main"
    # Library settings (optional)
    # In some cases it might be necessary to specify additional library paths and add additional
    # libraries. This can be done via this setting.
    LIBRARY_DIRECTIVES="-L/hpx/install/path/lib/ -lhpx -rdynamic /hpx/install/path/lib/libhpx_init.a -ldl -lrt -lhpx_wrap -L/boost/install/path/lib -lboost_system -lboost_program_options"
  2. ./configure Configfile
  3. make benchmark_name
  4. ./bin/benchmark_name

Notes:

  • You can change the vector or matrix sizes to run the benchmark on through the benchmark_name.prm file located in the blaze/blazemark/params folder.

For more information on available benchmarks, command line parameters, and the list of supported libraries, please visit Blazemark.

Compiling and Running BlazeTest

By Shahrzad Shirzad

BlazeTest is a testing tool provided by Blaze. In order to compile and run BlazeTest with the HPX backend, take the following steps:

  1.  Change the Configfile at blaze/blazetest by filling in the CXX=, CXXFLAGS= and LIBRARY_DIRECTIVES= fields:
    This is an example of the configurations used for Clang:
    # Compiler selection
    CXX="clang++"
    # Special compiler flags
    CXXFLAGS="-O3 -march=native -std=c++17 -stdlib=libc++ -DNDEBUG -fpermissive -DBLAZE_USE_HPX_THREADS -isystem /hpx/install/path/include -Wl,-wrap=main"
    # Library settings (optional)
    # In some cases it might be necessary to specify additional library paths and add additional
    # libraries. This can be done via this setting.
    LIBRARY_DIRECTIVES="-L/hpx/install/path/lib/ -lhpx -rdynamic /hpx/install/path/lib/libhpx_init.a -ldl -lrt -lhpx_wrap -L/boost/install/path/lib -lboost_system -lboost_program_options"
  2.  ./configure Configfile
  3.  make essentials
  4.  ./run