GSoC 2022 Participants Announced!

It is time to announce the participants in the STE||AR Group’s 2022 Google Summer of Code! We are very proud to announce the names of the 5 contributors this year who will be funded by Google to work on projects for our group.

These recipients represent the very best of the many excellent proposals that we had to choose from. For those unfamiliar with the program, the Google Summer of Code brings together ambitious students from around the world with open source developers by giving each mentoring organization funds to hire a set number of participants. Students then write proposals, which they submit to a mentoring organization, in hopes of having their work funded.

Below are the contributors who will be working with the STE||AR Group this summer listed with their mentors and their proposal abstracts.


Participant:

Shreyas Swanand Atre, Veermata Jijabai Technological Institute

Mentors:

Giannis Gonidelis

Hartmut Kaiser

Project: Coroutine-like interface

Keeping HPX up to date with current C++ standard proposals, senders/receivers were implemented as per P2300. However, they have been missing coroutine (co_await) integration and minor functionalities described in P2300, which is likely to be accepted into the standard. Hence I propose to implement these functionalities within the core HPX library. Benefits:

  • Coroutines lead to better async code: it is more readable, and local variables have the same lifespan as the coroutine, which means we don’t need to worry about allocation/release.
  • S/R algorithms can work with coroutines, which they cannot as of now unless they rely on futures, which as mentioned are single-use.
  • Adding co_await support makes the code more structured with respect to concurrency; this can also be done with library abstractions over callbacks, but using co_await may make it more optimized.
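
To illustrate the coroutine style the abstract refers to, here is a small, hedged comparison of continuation-style and coroutine-style async code using hpx::future (assuming an HPX build with C++20 coroutine support); it sketches the intended programming style rather than the sender/receiver integration the project will implement:

    // Hedged sketch: continuation chaining vs. coroutine style with HPX futures.
    // Assumes HPX is built with C++20 coroutine support; this is not the P2300
    // sender/receiver integration the project proposes.
    #include <hpx/hpx.hpp>

    // Continuation style: state is threaded through callbacks.
    hpx::future<int> double_then_add_one(int x)
    {
        return hpx::async([x] { return 2 * x; })
            .then([](hpx::future<int> f) { return f.get() + 1; });
    }

    // Coroutine style: reads top to bottom, locals live in the coroutine frame.
    hpx::future<int> double_then_add_one_coro(int x)
    {
        int doubled = co_await hpx::async([x] { return 2 * x; });
        co_return doubled + 1;
    }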


Participant:

Panos Syskakis, Aristotle University of Thessaloniki

Mentors:

Giannis Gonidelis

Hartmut Kaiser

Project:  HPX Algorithm Performance Analysis & Optimization

The latest C++ specifications and the HPX library introduce a variety of ready-to-use algorithms that may use parallelization and concurrency, in order to more efficiently utilize system resources. However, current implementations of parallel algorithms don’t always perform ideally (low thread utilization, large overhead, in some cases slower than sequential). The goal of this project is to investigate this under-performance and improve current implementations, using scaling analysis, profiling tools and visualizations.


Participant:

Bo Chen, University of Science and Technology Beijing

Mentors:

Patrick Diehl

Project: Implement your favorite Computational Algorithm in HPX (Molecular Dynamics Simulation of Metal)

My implementation will be based on MISA-MD. There are various potential functions used in MD simulations, such as the Tersoff potential and the Lennard-Jones (L-J) potential, for calculating the interactions among atoms. To improve simulation accuracy, MISA-MD adopted the Embedded Atom Method (EAM) potential, a complex but quite accurate potential function, which can provide an effective interatomic description for metallic systems. To improve runtime performance, MISA-MD designed and realized a new hash-based data structure for efficient atom storage and quick neighbor-atom indexing.
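
For reference, the Lennard-Jones pair potential mentioned above has the standard textbook form V(r) = 4ε[(σ/r)¹² − (σ/r)⁶]; a minimal sketch (illustrative only, not MISA-MD code):

    #include <cmath>

    // Standard Lennard-Jones pair potential; eps and sigma are material parameters.
    double lennard_jones(double r, double eps, double sigma)
    {
        double sr6 = std::pow(sigma / r, 6);   // (sigma/r)^6
        return 4.0 * eps * (sr6 * sr6 - sr6);  // 4*eps*((sigma/r)^12 - (sigma/r)^6)
    }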


Participant:

Kishore Kumar, International Institute of Information Technology, Hyderabad

Mentors:

Nikunj Gupta

Srinivas Yadav

Project:  Implementing auto-vectorization hints for par_unseq and unseq versions of HPX parallel algorithms

C++17 and C++20 introduced the par_unseq and unseq execution policies, which guarantee to functions that specialize on them that data access functions may be interleaved, even between iterations on one thread. This means that these functions are vectorization safe and can thus gain massive boosts in performance from compiler auto-vectorization. Compilers, however, are conservative and auto-vectorize loops only when they are sure that the vectorized versions give the same result as their scalar counterparts and that vectorization will actually end up being profitable. GCC, Clang, MSVC, and ICC all rely on different optimization passes in their backends and are each capable of auto-vectorizing certain loop patterns, but not all. The goal of this project is to analyze compiler codegen response to different hints and to implement a version of the par_unseq and unseq execution policies in HPX that makes use of these guarantees to provide compilers with as many hints as possible to encourage auto-vectorization.
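
As a rough illustration of that guarantee, here is a hedged sketch using HPX's par_unseq policy (assuming an HPX version that already provides hpx::execution::par_unseq; the function scale is illustrative):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <vector>

    // Scales a vector in place. The lambda touches only its own element and
    // takes no locks, so iterations may be interleaved and auto-vectorized.
    void scale(std::vector<float>& v, float a)
    {
        hpx::for_each(hpx::execution::par_unseq, v.begin(), v.end(),
            [a](float& x) { x *= a; });
    }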


Participant:

Monalisha Ojha, Birla Institute of Technology, Mesra

Mentors:

Kate Isaacs

Project: Multiple Dataset Performance Visualization

Traveler-Integrated is a web-based visualization system for parallel performance data, such as OTF2 traces and HPX execution trees. HPX traces are collected with APEX and written as OTF2 files with extensions. The major goal of this platform is to provide meaningful insights into parallel performance data in the form of Gantt charts (trace data timelines with dependencies), source code, expression tree, aggregated time series line charts for counter data, utilization chart and task level histograms. The aim of this project, “Multiple Dataset Performance Visualization,” is to add specific features in the platform that will help in managing multiple data files and organising traveler interface windows to handle the comparison of data. Organising multiple datasets in the platform, comparison of datasets side by side, implementing a highlighted linking system for multiple datasets and organising datasets efficiently for visualisation are some of the major sub-goals.

GSoD 2022 – STE||AR Group announces technical writer hire!

Documentation is a love letter that you write to your future self.

Damian Conway

We are proud to welcome Bhumit Attarde to the STE||AR Group as the new technical writer who will work with us during this year’s Google Season of Docs period. Bhumit will focus on developing additional content for our HPX documentation to help prospective users navigate our codebase more easily. We are looking forward to a fruitful collaboration that will benefit our open source community and enrich our impact in the world of High Performance Computing.

GSoD 2022

STE||AR Group was accepted for Google Season of Docs 2022! We look forward to developing our HPX documentation even more and expanding our group this summer.

https://developers.google.com/season-of-docs/docs/participants

Like Google Summer of Code (GSoC), the program aims to match motivated people with interesting open source projects that are looking for volunteer contributions. GSoD, however, aims to improve open source project documentation, which often tends to get less attention than the code itself.

We are now looking for motivated people to help us improve our documentation. If you have some prior experience with technical writing, and are interested in working together with us on making the documentation of a cutting edge open source C++ library the best possible guide for new and experienced users, this is your chance. You can read more about the program on the official GSoD home page. We’ve provided a few project ideas on our wiki, but you can also come up with your own. Our current documentation can be found here.

GSoC 22: Come and code a STE||AR Summer with us!

The STE||AR Group is honored to be selected as one of the 2022 Google Summer of Code (GSoC) mentor organizations! This program, which pays students over the summer to work on open source projects, has been a wonderful experience for students and mentors alike. This is our 8th summer being accepted by the program!

Interested students can find out more about the details of the program on GSoC’s official website. As a mentor organization we have come up with a list of suggested topics for students to work on; however, any student can write a proposal about any topic they are interested in. We find that students who engage with us on IRC (#ste||ar on freenode) or via our mailing list hpx-users@stellar-group.org have a better chance of having their proposals accepted and a better understanding of their project scope. Students may also read through our hints for successful proposals.

If you are interested in working with an international team of developers on the cutting edge of C++ parallel, task-based runtime systems please check us out!

STE||AR Spotlight: Srinivas Yadav

Srinivas Yadav is a final-year undergraduate student pursuing a Bachelor’s in Computer Science in India. He has been working with the STE||AR Group for 8 months now and is interested in vectorization in the field of HPC. He has worked on HPX for the project “Adding par_simd implementations to parallel algorithms”.

Srinivas relied on guidance from STE||AR Group members, Prof. Hartmut Kaiser, Nikunj Gupta, and Auriane Reverdell, to work through his GSoC project.

Srinivas published and presented the paper “Parallel SIMD – A Policy Based Solution for Free Speed-Up using C++ Data-Parallel Types” at ESPM2, SC21 in November 2021.

While working with the STE||AR Group and HPX, Srinivas has learned more about the field of computer science and programming. He learned how to use HPC clusters, and was given remote access to the Rostam cluster at LSU’s CCT by Hartmut Kaiser to run SIMD benchmarks on the different machines available. Currently, Srinivas is working remotely on the Ookami cluster at Stony Brook University to perform the SIMD benchmarks for ARM machines (on an A64FX node) and working with Octo-Tiger to port new vectorization backends, which could be used to run the Octo-Tiger benchmarks on these nodes.

He also learned the importance of collaboration with different researchers from different time zones. Getting international schedules together can be tricky!

Currently, Srinivas is working on three key areas:

First is adapting algorithms to SIMD policies in HPX. The main aim of this task is to adapt as many algorithms as possible to SIMD execution policies in HPX, which contributes to fixing #2333.

Second is to port std::experimental::simd to Octo-Tiger. There are many kernels in the Octo-Tiger library which currently use HPX-Kokkos with the Vc library for vectorization, but Vc is now deprecated and the plan is to replace it with std::experimental::simd.

Finally, he is porting the EVE library as a new vectorization/SIMD backend to HPX. HPX currently has two vectorization backends: the newer one is std::experimental::simd, and the older one is Vc (now deprecated), which needs to be replaced by a newer library; EVE seems to be a perfect fit for that slot.

Srinivas has applied to LSU for a master’s in Computer Science for the coming Fall Semester 2022. He is very excited to come to CCT and LSU!

Other than academics and work, Srinivas enjoys playing badminton, watching and playing cricket, and exploring new places and traveling. Recently, he started learning to cook and is learning a new Indian language (Tamil).

Exploit data parallelism using hpx simd and par_simd policies.

Srinivas Yadav

Introduction

Vectorization is a technique to allow in-core parallelism using CPU vector registers, which enables us to exploit data parallelism. Recent additions to the parallel algorithms in C++17 and C++20 accept an execution policy as the first argument, which changes the execution behaviour based on the given policy. We implement two new execution policies, hpx::execution::simd and hpx::execution::par_simd. The former policy executes sequentially with vectorization added, whereas the latter executes in parallel with vectorization. For both of these newly implemented policies, the iterator function no longer accepts static types; instead it accepts only generic types, i.e. templated or generic function objects. This allows the function object to work with both non-simd and simd policies with very little or no change in the code. We used std::experimental::simd (available in C++20 with GCC >= 11.1 and Clang >= 12) as the vectorization backend in implementing the two new execution policies. In the following sections we discuss example codes showing how to use these new facilities adapted to HPX, as well as benchmarks with results, performed on various architectures using different kernels.

Example Usage

The following example code snippet describes the use of the HPX for_each algorithm with different execution policies such as seq, par, simd and par_simd.
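
A minimal sketch of such an example, assuming a simple test harness (the names test and gen and the problem size are illustrative; depending on the HPX version, the simd/par_simd policies may require HPX built with datapar support):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <hpx/hpx_main.hpp>   // lets plain main() run on the HPX runtime

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    template <typename ExPolicy, typename T, typename Gen>
    void test(ExPolicy policy, T, Gen gen)
    {
        std::vector<T> nums(1'000'000);
        std::generate(nums.begin(), nums.end(), gen);

        hpx::for_each(policy, nums.begin(), nums.end(),
            [](auto& x)
            {
                // x is T for seq/par and std::experimental::simd<T> for
                // simd/par_simd; unqualified sin/cos pick the right overload.
                using std::sin;
                using std::cos;
                for (int i = 0; i < 100; ++i)
                {
                    x = sin(x);
                    x = cos(x);
                }
            });
    }

    int main()
    {
        std::mt19937 eng(42);
        std::uniform_real_distribution<float> dist(0.0f, 1.0f);
        auto gen = [&] { return dist(eng); };

        test(hpx::execution::seq, float{}, gen);
        test(hpx::execution::par, float{}, gen);
        test(hpx::execution::simd, float{}, gen);
        test(hpx::execution::par_simd, float{}, gen);
        return 0;
    }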

Note that we passed a generic lambda to the for_each algorithm as argument, because the same lambda can be used with different execution policies. The template argument ExPolicy is used to accept the execution policy, T is used for handling the data types when creating the std::vector, and Gen is used to accept a generator function to fill the std::vector. If the execution policy is seq or par, then the variable x is of arithmetic type T, such as int, float and so on, whereas if the execution policy is simd or par_simd, then x is of type std::experimental::simd<T>, such as std::experimental::simd<int>, std::experimental::simd<float> and so on. std::experimental::simd<T> is a vector_pack of type T, which is the value_type of the iterator nums. Contiguous elements of the iterator are loaded internally into the vector_pack. The sin and cos functions in the lambda are adapted to arithmetic types and vector_pack types (the latter are available in the std::experimental namespace).

The lambda used in this code snippet is a compute-bound kernel because of its high arithmetic intensity: the loop runs for 100 steps, performing sin and cos operations at each step.

Now, we look into another example using a memory-bound kernel (performing a SAXPY operation) with the help of the transform algorithm.
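
A hedged sketch of such a SAXPY kernel (the function name saxpy and the parameter names are illustrative):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <vector>

    // SAXPY: y = a * x + y, a memory-bound kernel (one multiply-add per two
    // loads and one store).
    template <typename ExPolicy, typename T>
    void saxpy(ExPolicy policy, T a, std::vector<T> const& x, std::vector<T>& y)
    {
        hpx::transform(policy, x.begin(), x.end(), y.begin(), y.begin(),
            [a](auto xi, auto yi)
            {
                // xi and yi are scalars for seq/par and vector_packs for
                // simd/par_simd; the scalar a is broadcast when multiplied
                // with a vector_pack.
                return a * xi + yi;
            });
    }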

This code snippet is very similar to the previous one, with changes to the lambda and the algorithm. Here as well, the arguments to the lambda are of one of two types: an arithmetic type (if the execution policy is seq or par) or a vector_pack type (if the execution policy is simd or par_simd).

The following code snippet describes the usage of algorithms such as count and find. These do not require any lambda, and hence vectorization is straightforward within the implementation itself.
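
A minimal sketch of such usage (the values 42 and 7 are illustrative):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <vector>

    void count_and_find_example(std::vector<int> const& nums)
    {
        // Number of elements equal to 42, computed with the vectorized policy.
        auto n = hpx::count(hpx::execution::par_simd, nums.begin(), nums.end(), 42);

        // Iterator to the first element equal to 7, or nums.end() if absent.
        auto it = hpx::find(hpx::execution::simd, nums.begin(), nums.end(), 7);

        (void) n;
        (void) it;
    }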

This class of algorithms is much easier and more amenable to vectorization because of the minimal user intervention, i.e. no lambda or function is taken in the arguments. Note that we get vectorization benefits only if the iterators passed to the algorithm are random access iterators.

Example Implementation

The datapar_loop function is the main vectorization backend helper function for most of the iterative algorithms in HPX; a simplified sketch of its structure follows the list below. The call function of the datapar_loop class can be divided into three main steps:

  • First, a prefix loop runs the code in sequential fashion, calling the function f on each element using the helper function datapar_loop_step::call1. This loop runs until it finds the first aligned element.
  • Second, the main vectorization loop, where the actual vectorization happens: the datapar_loop_step::callv function creates a vector_pack, loads the elements from the iterator, calls the function f, and then stores back the results.
  • Finally, the post-fix loop handles the elements at the end of the array or container that number fewer than the vector_pack size and hence cannot fill a single vector_pack. They are handled in sequential fashion, similar to the prefix loop.
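
A simplified, illustrative sketch of this three-phase structure is shown below; it is condensed for clarity and is not the actual HPX source (the inline is_data_aligned helper stands in for HPX’s internal alignment utilities):

    #include <experimental/simd>
    #include <cstddef>
    #include <cstdint>
    #include <iterator>
    #include <memory>

    // Checks whether the element's address satisfies the pack's memory alignment.
    template <typename Iterator>
    bool is_data_aligned(Iterator it)
    {
        using T = typename std::iterator_traits<Iterator>::value_type;
        constexpr std::size_t alignment = std::experimental::memory_alignment_v<
            std::experimental::native_simd<T>>;
        return reinterpret_cast<std::uintptr_t>(std::addressof(*it)) % alignment == 0;
    }

    template <typename Iterator, typename F>
    Iterator datapar_loop(Iterator first, Iterator last, F&& f)
    {
        using T = typename std::iterator_traits<Iterator>::value_type;
        using pack_type = std::experimental::native_simd<T>;

        // 1. Prefix loop: sequential until the first aligned element is reached.
        while (first != last && !is_data_aligned(first))
            f(*first++);

        // 2. Main loop: load a full vector_pack, apply f, store the result back.
        //    (element_aligned loads are always valid; after the prefix loop an
        //    aligned load could be used instead.)
        while (std::distance(first, last) >=
            static_cast<std::ptrdiff_t>(pack_type::size()))
        {
            pack_type tmp(std::addressof(*first), std::experimental::element_aligned);
            f(tmp);
            tmp.copy_to(std::addressof(*first), std::experimental::element_aligned);
            std::advance(first, pack_type::size());
        }

        // 3. Post-fix loop: the remaining elements do not fill a pack, so they
        //    are handled sequentially, like the prefix loop.
        while (first != last)
            f(*first++);

        return first;
    }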

Benchmarks

We ran the benchmarks for two classes of algorithms. The first class is iterative algorithms, where each element from the iterator gets mapped using some function; here we used the for_each and transform algorithms with compute-bound and memory-bound kernels. For the second class, we picked algorithms that contain conditional statements; these can be described as algorithms with SIMD mask reductions. For this class, we chose the count and find algorithms.

Results

The following figure shows the benchmark of the for_each algorithm with the compute-bound kernel (Example 1). These benchmarks were run on an Intel Xeon Skylake CPU with AVX-512; an AVX-512 vector register can hold 16 floating point elements. We can see a 12x speed up with the simd policy and over 140x with par_simd.

[Figure: benchmark results showing the speed up of the simd and par_simd policies relative to seq]

The above image shows the benchmark results graph depicting the speed ups of the simd, par and par_simd policies against the seq execution policy. These benchmarks were run on an AMD EPYC 7H12 with AVX2; an AVX2 vector register can hold 8 floating point elements. The array used contains 128 billion elements, with float and double as data types. We can see super-linear scaling of the simd speed up for the compute-bound kernel, i.e. the speed up of simd (10.37) is more than the vector_pack size (8), because the sin and cos implementations for scalar arithmetic types and vector_pack types are slightly different. We can see a speed up of three orders of magnitude when using the par_simd execution policy.

Conclusion

From the examples illustrated and the benchmarks, we can see how easy it is to vectorize code using the simd and par_simd execution policies and gain massive speed ups with very little change in the code. Work on adapting more algorithms to the simd and par_simd policies is still in progress. You can find the list of algorithms already adapted to these policies here.

STE||AR Spotlight: Alireza Kheirkhahan

Alireza Kheirkhahan is an IT Consultant in the STE||AR Group at LSU. He received his B.S. in Computer Engineering from Sharif University of Technology, Tehran, Iran, and his master’s in Computer Science from LSU.

Alireza’s master’s thesis focused on I/O backends and storage solutions. His main research focus is high performance computing, I/O systems, and high-throughput, redundant and distributed storage systems.

Alireza designed and implemented Rostam, the STE||AR Group’s small research cluster at LSU. Since 2015, he has managed and improved this cluster. Currently, the second generation of the cluster is in use, and the next generation is in design. Rostam consists of nearly 80 compute nodes and multiple storage servers. Over the years, more than a dozen graduate students have used Rostam for their thesis work, and more than thirty scientific publications have been created using this cluster.

On the CERA (Coastal Emergency Risk Assessment) project, Alireza acts as the resident high performance computing specialist. He carries out the specific HPC tasks for the domain scientists. He adapts and maintains their computational workloads on the HPC clusters provided by LSU and the State of Louisiana.

Alireza designed and implemented a special-purpose storage system for the CERA project. CERA has a particular need for a specific storage solution: the application creates bursts of gigabytes of data at once and then goes quiet for a few hours until another burst arrives. The recently generated data is highly valued and must be accessible quickly and reliably, but the older data must be archived in a cost-effective manner. With his research background, Alireza designed the new storage system to carry out both tasks at once, which reduced the data transfer time significantly and increased reliability.

Alireza lives in Baton Rouge, with his wife Shahrzad and son Damon. In his spare time, he enjoys woodworking and tinkering with electronics. 

STE||AR Student Wins University Research Award!

Nikunj Gupta, a STE||AR Group GSoC student from 2018 and continued collaborator with our group, has received a Thesis Award from his University, IIT Roorkee, based on his research. 

IIT Roorkee allows students to pursue research at foreign universities (Foreign BTP), wherein the student has one supervisor (an IIT Roorkee professor) and one co-supervisor, the professor under whom the student pursues the research. Dr. Hartmut Kaiser introduced Nikunj to Prof. Sanjay Laxmikant V. Kale at the University of Illinois at Urbana-Champaign for the purposes of this project. Dr. Kale interviewed Nikunj and agreed to provide a research internship from September to December. Nikunj proposed this research as a Foreign BTP to his university and was accepted.

After completion of his research internship, Nikunj received an award for his project during his University’s Convocation Ceremony held on Sept 11, 2021.

The details of the project, provided by Nikunj, are below. Congratulations, Nikunj! We are proud of the work you do.

Sept-Dec:

I worked with Sanjay on part 1 of my thesis, which was to write abstractions over Charm++ to generate a linear algebra library. The salient features of this library were out-of-order execution, completely asynchronous operations, and almost linear distributed scaling. As future work, I decided to optimize the library (something I’m doing right now as a graduate student here at UIUC) and to generate a frontend language that has a MATLAB-like syntax.

Jan-June:

I felt that I could add more to my thesis to make it stronger, so I decided to add my resilience work to it. I contacted Hartmut regarding the resiliency work, to which he agreed. The work was to implement resilience execution spaces in Kokkos. I created Kokkos executors for HPX to use Kokkos facilities to achieve resilience within HPX, and also added resilience execution spaces in Kokkos to achieve resilience within Kokkos. The resilience scheme within Kokkos is an auto-generative scheme that will work with all current and future execution spaces. I also included my past work on resilience in my thesis, so the work I did back in Summer 2019 and Summer 2020 was included as well. In Summer 2019, I worked on implementing local-only software resilience in HPX, which I extended to distributed software resilience in Summer 2020. The resilience work was also published last year at the FTXS workshop at Supercomputing, titled “Towards distributed software resilience in asynchronous many-task programming models”. We plan on writing a paper on the Kokkos resilient execution space as well!

STE||AR Spotlight: Nanmiao Wu

Nanmiao Wu is a Ph.D. student in the Department of Electrical and Computer Engineering and the Center for Computation and Technology, LSU. She has been working in the STE||AR Group for more than 2 years and is co-advised by Dr. Hartmut Kaiser, head of the STE||AR Group, and Dr. Ram Ramanujam, Director of CCT.

Before joining LSU, she received a B.S. degree in Electronic Information Science and Technology from Nankai University, and an M.S. degree in Electrical and Computer Engineering from the University of Macau.

Nanmiao’s research focuses on scalable and distributed high-performance computation for machine learning and deep learning applications.

She was an intern at Pacific Northwest National Laboratory (PNNL) from February to August 2021, developing an HPX runtime interface for SHAD, a C++ algorithm and data-structure library, for better scalability and performance. Linear scaling performance was achieved on a single locality with varying data-structure sizes and on multiple localities. During the internship, she utilized the HPX serialization library to bitwise serialize SHAD types. She also learned how to associate multiple tasks with the same handle, forming a task group, and run the callbacks on remote localities via customized actions.

Before that, she collaborated with PNNL on a scalable second-order optimization for deep learning applications. During the collaboration, she implemented a PyTorch second-order optimizer and compared its performance with stochastic gradient descent (SGD), a first-order optimizer, on an image classification task using a multi-layer perceptron network with one hidden layer. Scalable performance and improved throughput were achieved: a 2.2x speedup over SGD in the multi-threaded scenario, and a 5.8x speedup in the multi-process scenario.

Previously, she implemented a scalable and distributed alternating least squares (ALS) recommendation algorithm for large recommendation systems, along with a number of iterative solvers, on the open source distributed machine learning framework Phylanx. It was shown that the Phylanx ALS implementation is faster than an optimized NumPy implementation (both running on CPUs only) on a single node and exhibits improving speedups as the number of nodes increases [1]. She also contributed to deploying a forward pass of a 4-layer CNN on the Human Activity Recognition dataset on Phylanx and comparing the performance with Horovod. It was observed that Phylanx shows a notable reduction in execution time as the number of nodes increases and takes less execution time (about 18%) than Horovod when using 32 or more nodes [2].

Outside the lab, Nanmiao enjoys spending time in nature. She likes hiking, camping, snorkeling, and travelling. She also likes reading. Her favorite books of 2021 are the Neapolitan Novels.

References:

[1] Steven R. Brandt, Bita Hasheminezhad, Nanmiao Wu, Sayef Azad Sakin, Alex R. Bigelow, Katherine E. Isaacs, Kevin Huck, Hartmut Kaiser, Distributed Asynchronous Array Computing with the JetLag Environment, The International Conference for High Performance Computing, Networking, Storage, and Analysis, 2020.

[2] Bita Hasheminezhad, Shahrzad Shirzad, Nanmiao Wu, Patrick Diehl, Hannes Schulz, Hartmut Kaiser, Towards a Scalable and Distributed Infrastructure for Deep Learning Applications, 2020 IEEE/ACM Fourth Workshop on Deep Learning on Supercomputers (DLS), 2020.