STE||AR Group, 10 years of GSoC Mentorship – Summer 2024

The STE||AR Group is honored to be selected as one of the 2024 Google Summer of Code (GSoC) mentor organizations! This program, which pays students over the summer to work on open source projects, has been a wonderful experience for students and mentors alike. This is our 10th summer being accepted by the program!

Interested students can find out more about the details of the program on GSoC’s official website. As a mentor organization we have come up with a list of suggested topics for students to work on; however, any student can write a proposal about any topic they are interested in. We find that students who engage with us on Discord or via our mailing list hpx-users@stellar-group.org have a better chance of having their proposals accepted and a better understanding of their project scope. Students may also read through our hints for successful proposals.

If you are interested in working with an international team of developers on the cutting edge of C++ parallel, task-based runtime systems please check us out!

STEM Careers at the NSA and Quantum Computing

Talk title: STEM Careers at the NSA and Quantum Computing

Speaker: Sean Nemetz-MA, National Security Agency

Location: Digital Media Center Theatre

Date: February 08, 2024 – 02:00 pm

The talk was hosted by CCT and sponsored by the Women in Math Society of the National Security Agency. This event promised to be a compelling exploration of potential STEM careers at the NSA, quantum computing, and cryptography. During the talk, the speaker discussed opportunities for a STEM career at the agency, followed by a more technical discussion of quantum computing, its immediate application in public key cryptography, and the potential impact of quantum computing on the NSA’s mission.

Here you can find more information about the Women in Math Society (WiMS) and the speaker.

We also asked the students to register so that we could gather some statistics. 28 students registered, but there were more in the room. The majority of attendees were LSU Computer Science students, and there were also some from the Math Department.

The student attendees were engaged, and the event was well received. Q&A continued between the speaker and attendees even after the event ended.

Below are some graphs detailing attendee demographics by field and level of study, race/ethnicity, and gender identity.

Below is a list of minority or underrepresented groups to which some of the student attendees belong:
Women

Racial minority, LGBTQ+, and disability

Hispanic / Latino

Black American

African American

Nigerian

Hispanic female

LGBTQ+ community

Vietnamese, LGBTQ+

African American, LGBTQ+

Veteran

GSoC 2023 Participants Announced!

It is time to announce the participants in the STE||AR Group’s 2023 Google Summer of Code! We are very proud to announce the names of the 5 contributors this year who will be funded by Google to work on projects for our group.

These recipients represent the very best of the many excellent proposals that we had to choose from. For those unfamiliar with the program, the Google Summer of Code brings together ambitious students from around the world with open source developers by giving each mentoring organization funds to hire a set number of participants. Students then write proposals, which they submit to a mentoring organization, in hopes of having their work funded.

Below are the contributors who will be working with the STE||AR Group this summer listed with their mentors and their proposal abstracts.


Participant:

Aarya Chaum, College of Engineering, Pune

Mentors: Rod Tohid, Shreyas Atre

Project: hpxMP: HPX threading system for LLVM OpenMP

One of the challenges in adopting HPX is the performance degradation observed in applications that use OpenMP. This occurs because of the contention between HPX threads and OpenMP’s native threading system (i.e., pthreads) over available resources. hpxMP aims to resolve this issue by adding support for HPX threads as an alternative to pthreads in LLVM OpenMP. This work relies on HPXC, which replicates the pthread API.


Participant:

Arnav Negi, International Institute of Information Technology, Hyderabad

Mentors: Shreyas Atre, Alireza Kheirkhahan

Project:  Async I/O using Coroutines and S/R – Traversing large scale graphs

If graphs are very large, their adjacency lists become harder and slower to read and process. This can be a real concern in graph algorithms, as the I/O operations will slow them down considerably. The goal is to maximize the speedup for this use case using asynchronous I/O and parallel algorithms. The implementation will use io_uring along with coroutines for asynchronously reading the graph files, senders and receivers to traverse the graph using the parallel execution policy par_unseq, and multiple NUMA domains to further accelerate memory access.


Participant:

Hari Hara Naveen, Indian Institute of Technology

Mentors: Srinivas Singanaboina

Project: Add Vectorization to par_unseq Implementations of Parallel Algorithms

HPX parallel algorithms currently don’t support the par_unseq execution policy. This project centers on implementing this execution policy for at least some of the existing algorithms (such as for_each and similar).


Participant:

Isidoros Tsaousis, Aristotle University of Thessaloniki

Mentors: Giannis Gonidelis

Project:  Implement hpx::relocate (P1144)

Modern C++ specifications and the HPX library offer a rich set of algorithms to ensure efficient resource utilization. Nevertheless, there is still room for improvement in data movement operations. Proposal P1144 introduces std::relocate, a feature designed to optimize data relocation by making it safer, faster, and greatly simpler. Essentially, std::relocate utilizes a single memcpy operation to move objects while avoiding unnecessary move-constructor and destructor calls. This improvement impacts key primitives like swap and vector.reserve, subsequently leading to speedup in higher-level algorithms such as rotate and sort. The goal of this proposal is to implement relocation in HPX.
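To illustrate the core idea, here is a minimal sketch (relocate_at_sketch is a hypothetical helper used purely for illustration, not the P1144 or HPX interface, and it is only valid for trivially relocatable types):

    #include <cstring>
    #include <new>

    // Hypothetical illustration: for a trivially relocatable type T, relocation
    // amounts to a single memcpy; the source is treated as destroyed afterwards,
    // so no move constructor or destructor needs to run.
    template <typename T>
    T* relocate_at_sketch(T* source, T* destination)
    {
        std::memcpy(static_cast<void*>(destination), source, sizeof(T));
        return std::launder(destination);
    }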


Participant:

Shubham Kumar, Indian Institute of Information Technology, Kalyan

Mentors: Steve Brandt, Rod Tohid

Project: Pythonize HPX!

The project aims to create a Python wrapper for the HPX task-based runtime system to make it more accessible to non-expert users who may not be proficient in C++. The HPX library provides parallel and distributed algorithms and data structures for C++, which can be challenging for beginners to use. The Python wrapper will address this challenge by providing a user-friendly interface for the HPX library, enabling users to leverage its power without requiring knowledge of C++. The project will help increase the accessibility of the HPX library and allow more people to benefit from its performance advantages. However, there are challenges associated with creating a Python binding for parallel computing, such as thread locking due to the Global Interpreter Lock (GIL), templates, and reference counting. The deliverables of this project will include a Python wrapper for the HPX library, documentation, and examples to help users get started.

WAMTA 2023 Best Poster Award

We are pleased to announce the 1st place Best Poster Award winner, Maxwell Cole, for his poster: Computational feasibility of simulating radiation induced changes to vasculature and blood flow rates in the entire human body.

2nd place was awarded to Ioannis Gonidelis for the poster titled Evaluating and Improving Shared Memory Performance of HPX and OpenMP using Task Bench.

Each year, the Best Poster Award recognizes outstanding presentations in the conference’s Poster Session. Posters are judged by external workshop attendees.

We would like to thank Hewlett Packard Enterprise (HPE) for sponsoring the poster prizes.

Maxwell’s poster can be viewed at the following link: https://zenodo.org/record/7647521#.Y_fLWS-B1KM

GSoC 2022 – Adapting std Algorithms for the unseq and par_unseq Execution Policies

Kishore Kumar, International Institute of Information Technology, Hyderabad

Adapting std Algorithms for the unseq and par_unseq Execution Policies

I began my work by analyzing and testing compiler support and codegen for different user-provided hints. This was used to create the original version of #6016. Later, I added support for the omp backend, which is supported out of the box by recent versions of Clang and ICC. As of the latest PR, the unseq backend will first attempt to use the omp backend and, if it is not available, fall back to compiler-specific hints.
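To give a sense of what these hints look like, here is an illustrative sketch (the exact macros and pragmas used in HPX may differ): when the omp backend is available, the portable OpenMP pragma is emitted, otherwise a compiler-specific hint is used.

    // Illustrative only: possible vectorization hints for a simple loop.
    void scale(float* y, float const* x, float a, int n)
    {
    #if defined(_OPENMP)
    #pragma omp simd                      // omp backend: portable OpenMP hint
    #elif defined(__clang__)
    #pragma clang loop vectorize(enable)  // Clang-specific hint
    #elif defined(__GNUC__)
    #pragma GCC ivdep                     // GCC-specific hint
    #endif
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i];
    }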

After this, the next task assigned to me was to implement a basic version of the transform_loop and loop CPOs. This was initially completed with only the original non-omp backend in mind; later, it was ported to support the omp backend as well. In particular, GCC will emit errors if the loops it is asked to vectorize do not conform to the canonical syntax:

for (int counter = 0; counter < limit; counter++) { … }

So the implementation was changed to vectorize loops only when passed iterators satisfying std::random_access_iterator. This is #6017.

Following this, I wrote a mini-benchmark environment for testing the performance of my adaptation of the std algorithms here. This exists as a separate repo and was used to report all the benchmark numbers shown here. 

A strong case for switching to the omp backend was its support for declaring reductions via reduction clauses. The next task I worked on was implementing an efficient version of the reduce CPOs (#6018). Reductions over the default-supported operations are dispatched to their respective specializations, and a generalized implementation is provided as well. This mostly gets the job done; however, for the specialized overloads to be selected, the reduction operation must exactly match the type of the init value. For example, if the reduction is over unsigned int and init is signed, the overload will not be selected. This is a TODO that I believe can be solved with more template metaprogramming. I will be working on this post-GSoC.
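As a minimal sketch of the underlying idea (not the HPX reduce CPO itself), the omp backend can express such a reduction directly through a reduction clause; note that the accumulator and the init value share the same type, matching the limitation described above.

    #include <cstddef>

    // Sum reduction vectorized via the OpenMP simd reduction clause.
    float sum_reduce(float const* data, std::size_t n, float init)
    {
        float result = init;
    #pragma omp simd reduction(+ : result)
        for (std::size_t i = 0; i < n; ++i)
            result += data[i];
        return result;
    }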

Note: GCC unseq can probably be made a decent amount faster by switching to the omp backend (which GCC does not support by default). Also, the Clang no-vec benchmarks were removed from the chart as they were very slow and skewed the visualization.

GSoC 22: First Eval Update

In the second week of July, we completed the first evaluation of our Google Summer of Code program. The students have provided summaries of their work and details of the pull requests they’ve created. Check them out below:

Monalisha Ojha:

https://medium.com/@monalisha-ojha/multiple-datasets-performance-visualization-traveler-a352c13f7c25

Multiple Datasets Performance Visualization — Traveler

Phase-1 of Google Summer of Code 2022 at Stellar Group

This summer, I am working as a Google Summer of Code mentee in STE||AR Group on “Upgrading Multiple Datasets Performance Visualization feature in Traveler” under the mentorship of Kate Isaacs. This blog summarizes my work on the Traveler Platform during phase 1 of Google Summer of Code 2022 program.

About Traveler

Traveler-Integrated is a web-based visualization system for parallel performance data, such as OTF2 traces and HPX execution trees. HPX traces are collected with APEX and written as OTF2 files with extensions. It is developed by the HDC Lab (Humans, Data and Computers Lab) at the University of Arizona. The major goal of this platform is to provide meaningful insights into parallel performance data in the form of Gantt charts (trace data timelines with dependencies), source code, expression trees, aggregated time series line charts for counter data, utilization charts, and task-level histograms.

Web Interface of Traveler

Abstract

The aim of this project, “Multiple Datasets Performance Visualization,” is to add specific features to the platform that will help in managing multiple data files and organizing Traveler interface windows to handle the comparison of data. Organizing multiple datasets in the platform, comparing datasets side by side, implementing a highlighted linking system for multiple datasets, and organizing datasets efficiently for visualization are some of the major sub-goals.

Phase — 1

Updated the Tagging system of Traveler Interface to accommodate multiple datasets

Issue: Organizing the datasets according to their assigned tags.

Made changes in the interface main menu to display the datasets according to their tag names. Tested the tagging system back-end to accommodate multiple datasets. The screenshot displays the fixes made when tested with 2 datasets.

Traveler Interface

Issue Link: https://github.com/hdc-arizona/traveler-integrated/issues/90

Pull Request: https://github.com/hdc-arizona/traveler-integrated/pull/91

Fixed glitches related to the Traveler front-end

Issue: Displaying a clear relationship between a folder and its datasets.

Made changes in the front-end to make visible the lines that show the connection between a folder and its datasets. Adjusted the tag header to solve the tag overlapping issue for multiple datasets. A screenshot of the changes is shown below.

Traveler Interface

Issue link: https://github.com/hdc-arizona/traveler-integrated/issues/92

Pull request link: https://github.com/hdc-arizona/traveler-integrated/pull/93

Adding a dynamic color highlighting system

Issue: Adding a color picker system to distinguish between multiple datasets.

A “Change Datasets Color” option was added to the dataset context menu. With this feature, a user can change a dataset’s selection color and main menu color so that it is distinguishable from other datasets. Screenshots of the changes made so far are displayed below:

Traveler Interface

Pull request link: https://github.com/hdc-arizona/traveler-integrated/pull/94

Shreyas Atre:

https://satacker.github.io/docs/c++/GSoC-HPX/

Mentors (STE||AR Group @ LSU)

  1. Dr. Hartmut Kaiser, Adjunct Professor @ LSU
  2. Giannis Gonidelis, RA @ LSU

Abstract

HPX keeps up to date with C++ standard proposals, and senders/receivers were implemented as per P2300. However, they have been missing coroutine (co_await) integration and some minor functionality described in P2300, which is likely to be accepted into the standard. Hence I plan to implement these functionalities within the core HPX library.

  • Benefits:
    • Coroutines enable better async code: it is more readable, and local variables have the same lifetime as the coroutine, so we don’t need to worry about allocation and release.
    • S/R algorithms can work with coroutines, which they currently cannot unless they rely on futures, which, as mentioned, are single-use.
    • Adding co_await support makes the code more structured with respect to concurrency; this can also be achieved with library abstractions over callbacks, but using co_await may allow better optimization.

Brief Summary

  • Senders and Receivers
    • They make for a more consistent programming model across asynchronous programming styles, i.e. parallelism and concurrency, standardizing terminology and execution policies that are more generic and reduce redundancy.
    • There is a direct connection between senders and coroutine awaitables.
  • Futures
    • One of the points of S/R is to avoid the allocations associated with futures; also, futures are single-use, whereas senders, in general, can be used (started) multiple times. – Dr. H. Kaiser

The goal is to enable all sender CPOs to do the following:

  • If we write a sender and pass it to a function, that function could be a coroutine that co_awaits the sender and gets its result (see the sketch after this list).
  • If senders are not generally awaitable, then we can await-transform them (i.e. make them awaitable).
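A minimal sketch of the intended usage, assuming the P2300-style facilities in hpx::execution::experimental; task<T> here is a hypothetical coroutine task type whose promise uses with_awaitable_senders, so any sender can be co_awaited inside it:

    #include <hpx/execution.hpp>
    #include <utility>

    namespace ex = hpx::execution::experimental;

    ex::task<int> answer()                    // hypothetical task type
    {
        auto s = ex::just(41) | ex::then([](int i) { return i + 1; });
        int value = co_await std::move(s);    // the sender is awaited directly
        co_return value;
    }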

Work

My PRs can be found using this link, which will always be kept up to date.

The following PRs have been merged so far:

With coroutine traits completed, my remaining work is the following:

  1. Adapt get_completion_signatures when the sender is an awaitable
  2. Utility as_awaitable_t
    • receiver_base / sender_awaitable_base
    • to transform an object into one that is awaitable within a particular coroutine
  3. Promise base for 5.
  4. Operation base for 5.
  5. Utility connect_awaitable to adapt connect, as mentioned in section 2.2 of the spec
  6. Utility with_awaitable_senders
    • Used as the base class of a coroutine promise type; it makes senders awaitable in that coroutine type


Panagiotis Syskakis:

I’m Panos, currently studying Electrical and Computer Engineering at the Aristotle University of Thessaloniki in Greece. This summer, I joined the HPX team as a contributor through Google Summer of Code (GSoC).

My GSoC project involves performance analysis and optimization on C++ standard parallel algorithms.

To explain further:
The C++ standard defines many functions for algorithms that are commonly used by developers (e.g. sorting, searching).
HPX provides sequential and parallel implementations for all these algorithms.
I’m working on improving the performance of these implementations.

So far, I have explored different methodologies for visualizing and assessing an algorithm’s performance. This has involved a lot of scripting for automating tasks, as well as data collection and analysis.

With help from my mentor, I have produced plots that show how an algorithm’s performance changes when tweaking different parameters (such as workload size and number of computer cores). We also produced visualizations of how different tasks are distributed and where/how they are executed in a parallel environment.

Most importantly though:
The HPX community has been immensely welcoming. It can often be awkward being “the new junior guy”, but my mentor quickly made me feel like a part of the team.
People here are talented, but also fun and humble, and always eager to help.

This summarizes my experience for the first two months of GSoC. I have learned tons so far. My work here is far from done; however, we have laid a great foundation for the work that will follow.

GSoD 2022 – STE||AR Group announces technical writer hire!

Documentation is a love letter that you write to your future self.

Damian Conway

We are proud to welcome Bhumit Attarde to the STE||AR Group as the new technical writer who will work with us during this year’s Google Season of Docs period. Bhumit will focus on developing additional content for our HPX documentation to help prospective users navigate our codebase more easily. We are looking forward to a fruitful collaboration that will benefit our open source community and enrich our impact in the world of High Performance Computing.

Exploit data parallelism using the HPX simd and par_simd policies

Srinivas Yadav

Introduction

Vectorization is a technique that allows in-core parallelism using CPU vector registers, which enables us to exploit data parallelism. Recent additions to the parallel algorithms in C++17 and C++20 accept an execution policy as the first argument, which changes the execution behaviour based on the given policy. We implement two new execution policies, hpx::execution::simd and hpx::execution::par_simd. The former executes sequentially with vectorization added, whereas the latter executes in parallel with vectorization. For both of these newly implemented policies, the iterated function no longer accepts static types; it must instead accept generic types, i.e. templated or generic function objects. This allows the same function object to work with both non-simd and simd policies with very little or no change in the code. We used std::experimental::simd (available in C++20 with GCC >= 11.1 and Clang >= 12) as the vectorization backend in implementing the two new execution policies. In the following sections we discuss example code showing how to use these new facilities adapted to HPX, as well as benchmark results obtained on various architectures using different kernels.

Example Usage

The following example code snippet describes the use of the HPX for_each algorithm with different execution policies such as seq, par, simd and par_simd.
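A representative sketch of such a snippet is shown below (hedged: the exact code in the original post may differ, and names like run_for_each are illustrative only).

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    template <typename T, typename ExPolicy, typename Gen>
    std::vector<T> run_for_each(ExPolicy policy, std::size_t n, Gen gen)
    {
        std::vector<T> nums(n);
        std::generate(nums.begin(), nums.end(), gen);

        hpx::for_each(policy, nums.begin(), nums.end(), [](auto& x) {
            // x is T for seq/par and std::experimental::simd<T> for simd/par_simd
            using std::cos;
            using std::sin;
            auto t = x;
            for (int step = 0; step != 100; ++step)
                t = sin(t) + cos(t);
            x = t;
        });
        return nums;
    }

    // e.g. run_for_each<float>(hpx::execution::simd, 1000000, [] { return 0.5f; });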

Note that we passed a generic lambda to the for_each algorithm, because the same lambda can then be used with different execution policies. The template argument ExPolicy is used to accept the execution policy, T is the data type used to create the std::vector, and Gen is used to accept a generator function to fill the std::vector. If the execution policy is seq or par, then the variable x is of the arithmetic type T, such as int, float, and so on, whereas if the execution policy is simd or par_simd, then x is of type std::experimental::simd<T>, such as std::experimental::simd<int>, std::experimental::simd<float>, and so on. std::experimental::simd<T> is a vector_pack of type T, which is the value_type of the iterator nums. Contiguous elements of the iterator are loaded internally into the vector_pack. The sin and cos functions in the lambda are adapted to both arithmetic types and vector_pack types (available in the std::experimental namespace).

The lambda used in this code snippet is a compute-bound kernel because of its high arithmetic intensity: the loop runs for 100 steps, performing sin and cos operations at each step.

Now, we look at another example using a memory-bound kernel (performing a SAXPY operation) with the help of the transform algorithm.
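Again as a hedged, representative sketch (names are illustrative; the original snippet may differ), the SAXPY kernel can be written with hpx::transform using the same generic-lambda pattern as above.

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <vector>

    template <typename T, typename ExPolicy>
    void saxpy(ExPolicy policy, T a, std::vector<T> const& x, std::vector<T>& y)
    {
        hpx::transform(policy, x.begin(), x.end(), y.begin(), y.begin(),
            [a](auto xi, auto yi) {
                // xi and yi are scalars for seq/par, vector packs for simd/par_simd
                return a * xi + yi;
            });
    }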

This code snippet is very similar to the previous one, with changes only to the lambda and the algorithm. Here as well, the arguments to the lambda are of one of two types: arithmetic types (if the execution policy is seq or par) or vector_pack types (if the execution policy is simd or par_simd).

The following code snippet describes the usage of algorithms such as count and find. These do not require any lambda, so vectorization follows directly from the implementation itself.
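A short, hedged sketch of what such calls look like (no user lambda is involved):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <vector>

    void count_and_find(std::vector<int> const& nums)
    {
        // Both algorithms vectorize internally when given a vectorizing policy.
        auto matches = hpx::count(hpx::execution::simd, nums.begin(), nums.end(), 42);
        auto pos = hpx::find(hpx::execution::par_simd, nums.begin(), nums.end(), 42);
        (void) matches;
        (void) pos;
    }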

This class of algorithms is much easier to vectorize because user intervention is minimal, i.e. no lambda or function is taken as an argument. Note that we get vectorization benefits only if the iterators passed to the algorithm are random access iterators.

Example Implementation
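Below is a simplified, illustrative sketch of the structure described in this section (the actual datapar_loop in HPX is more general and works through the datapar_loop_step helpers):

    #include <experimental/simd>
    #include <cstddef>
    #include <cstdint>

    template <typename T, typename F>
    void datapar_loop_sketch(T* first, T* last, F&& f)
    {
        namespace stdx = std::experimental;
        using pack = stdx::native_simd<T>;

        // prefix loop: scalar steps until the data pointer is suitably aligned
        while (first != last &&
            reinterpret_cast<std::uintptr_t>(first) % stdx::memory_alignment_v<pack> != 0)
        {
            f(*first++);
        }

        // main loop: load a full vector pack, apply f, store the results back
        for (; last - first >= std::ptrdiff_t(pack::size()); first += pack::size())
        {
            pack v(first, stdx::vector_aligned);
            f(v);
            v.copy_to(first, stdx::vector_aligned);
        }

        // postfix loop: leftover elements that do not fill a pack, handled sequentially
        while (first != last)
            f(*first++);
    }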

The datapar_loop function, sketched above in simplified form, is the main vectorization backend helper function for most of the iterative algorithms in HPX. The call in the datapar_loop class can be divided into three main steps:

  • First, a prefix loop runs sequentially, calling the function f on each element via the helper function datapar_loop_step::call1. This loop runs until it reaches the first aligned element.
  • Second, the main vectorized loop performs the actual vectorization with the datapar_loop_step::callv function, which creates a vector_pack, loads the elements from the iterator, calls the function f, and stores the results back.
  • Finally, the postfix loop handles the elements at the end of the array or container that are fewer than the vector_pack size and hence cannot fit into a single vector_pack; these are handled sequentially, like the prefix loop.

Benchmarks

We ran benchmarks for two classes of algorithms. The first class is iterative algorithms, where each element from the iterator is mapped using some function; here we used the for_each and transform algorithms with compute-bound and memory-bound kernels. The second class consists of algorithms containing conditional statements, which can be described as algorithms with simd mask reductions; for this class, we chose the count and find algorithms.

Results

The following figure shows the benchmark of the for_each algorithm with the compute-bound kernel, i.e. Example 1. These benchmarks were run on an Intel Xeon Skylake with AVX-512; an AVX-512 vector register can hold 16 floating point elements. We can see a 12x speedup with the simd policy and over 140x with par_simd.

[Figure: speedups of the simd, par, and par_simd policies relative to seq]

The above image shows the benchmark results graph depicting the speedups of simd, par, and par_simd against the seq execution policy. These benchmarks were run on an AMD EPYC 7H12 with AVX2; an AVX2 vector register can hold 8 floating point elements. The array used contains 128 billion elements with float and double as data types. We can see super-linear scaling for the simd speedup in the compute-bound kernel, i.e. the simd speedup (10.37) is greater than the vector_pack size (8), because the sin and cos implementations for scalar arithmetic types and vector_pack types are slightly different. We see a speedup of three orders of magnitude when using the par_simd execution policy.

Conclusion

From the examples illustrated and the benchmarks, we can see how easy it is to vectorize code using the simd and par_simd execution policies and gain massive speedups with very little change to the code. Adapting further algorithms to the simd and par_simd policies is still in progress; you can find the list of algorithms adapted to these policies.

GSoC 2021 – Add vectorization to par_unseq implementations of Parallel Algorithms

by Srinivas Yadav

GSoC 2021 Final Report

Abstract

HPX algorithms support data parallelism through explicit vectorization using the Vc library, but only for a few algorithms like for_each, transform and count; moreover, support for the Vc library has recently been deprecated and replaced by std::experimental::simd. In this project I adapted many algorithms to datapar using the new std::experimental::simd backend with two new policies, simd and par_simd, built on the data-parallel types proposed in the experimental namespace. Separate tests have been created for all the algorithms adapted to datapar.

I have created a new GitHub repository, std-simd-perf, for the benchmarks of the algorithms that I adapted to datapar; it contains various plots for speedup analysis and roofline models for artificial benchmarks and real-world applications.

Pull Requests for HPX Repo

Merged

Open

Other Adapted Algorithms to datapar [code]: 

  • adjacent_difference
  • adjacent_find
  • all_of, any_of, none_of
  • copy
  • count
  • find
  • for_each
  • generate
  • transform

Performance Benchmarks

  • The std-simd-perf repository contains all the benchmarks for simd on artificial algorithms such as for_each, transform, count, find, etc., and on real-world examples such as the Mandelbrot set.
  • These benchmarks were run on different clusters and have separate branches for each architecture in the repo.
  • Speed up plot for a compute bound kernel using for_each algorithm
  • Speed up plot for a simd reduction based algorithm using count algorithm

Beyond GSoC

  • Adapt the rest of the algorithms (#2333) to support data parallelism.
  • I will continue working with the STE||AR Group on HPX in other areas as well, as this is a great community with great people to learn from and expand my knowledge.

Acknowledgements

Special thanks to Hartmut Kaiser, Nikunj Gupta and Auriane R. for all the guidance and help with frequent meetings.