HPX V0.9.6: Release Notes

We are proud to announce the sixth formal release of HPX (V0.9.6). We have had roughly 1200 commits since the last release and we have closed around 140 tickets (bugs, feature requests, etc.). This is again a very large number of closed tickets for an HPX release. Please report any issues you encounter through our issue tracker.

What’s New?

With the changes below, HPX is leading the charge into a whole new era of computation. By intrinsically breaking down and synchronizing the work to be done, HPX ensures that application developers no longer have to fret about where a segment of code executes. HPX allows coders to focus their time and energy on understanding the data dependencies of their algorithms, and thereby the core obstacles to an efficient code. Here are some of the advantages of using HPX:

  • HPX exposes an API equivalent to the facilities standardized by C++11/14, extended to distributed computing. Everything programmers know about the primitives in the standard C++ library remains valid in the context of HPX.
  • There is no need for the programmer to worry about lower level parallelization paradigms like threads or message passing; no need to understand pthreads, MPI, OpenMP, or Windows threads, etc.
  • There is no need to think about different types of parallelism, such as task, pipeline, or fork-join parallelism, or task versus data parallelism.
  • The same source code compiles and runs on Linux, MacOS, Windows, and Android.
  • The same code runs on shared memory multi-core systems and supercomputers, on handheld devices and Xeon-Phi accelerators, or on a heterogeneous mix of those.

In this release we have made several significant changes:

  • Consolidated API to be aligned with the C++11 (and the future C++14) Standard
  • Implemented a distributed version of our Active Global Address Space (AGAS)
  • Ported HPX to the Xeon-Phi device
  • Added support for the SLURM scheduling system
  • Improved the performance counter framework
  • Added parcel (message) compression and parcel coalescing systems
  • Allowed different scheduling policies for different parts of code through an experimental executors API
  • Added experimental security support on the locality level
  • Created a native transport layer on top of Infiniband networks
  • Created a native transport layer on top of low level MPI functions
  • Added an experimental tuple-space object

Where to download

  File        MD5 Hash
  zip (5.7M)  67969ebc9e14232cf4238bf70fe11455
  gz  (4.4M)  f068e1747771fe7a75a9387b227777f1
  bz2 (2.9M)  de7c9461bad42ccd3ee05920ca53504f
  7z  (2.5M)  7233b104ea2849b8d3f82defb6e8fe2e

If you would like to access the code directly, please clone the git repository at http://github.com/STEllAR-GROUP/hpx. Please refer to the file README.rst in the root directory of the downloaded archives, or to the documentation, for more information about how to get started.

Bug reports via email (hpx-users@cct.lsu.edu) are welcome.