HPX V0.9.8: Release Notes

The STE||AR Group is proud to announce a new formal release of HPX (V0.9.8) — a C++ runtime system for parallel and distributed applications of any scale. While this release is mainly based on contributions made by partners of the STE||AR Group from Louisiana State University in Baton Rouge (LA, USA) and the Friedrich-Alexander University Erlangen (Germany), it would not have been possible without the help of many people from all over the world. Thanks to everybody involved!

This release coincides with the 6th anniversary of the start of the HPX project. Happy birthday, HPX! It also happens right after the 10,000th commit to the HPX repository, with more than 2700 commits just during the last year. More than 60 people have contributed to HPX since the project started, and more than 30 people contributed during the last 12 months alone. We are very proud of reaching this point, which demonstrates our long-term commitment to the project and the high impact HPX has on the community.

What’s New?

With the changes below, HPX is once again leading the charge into a whole new era of computation. By intrinsically breaking down and synchronizing the work to be done, HPX ensures that application developers will no longer have to fret about where a segment of code executes. That allows coders to focus their time and energy on understanding the data dependencies of their algorithms, and thereby the core obstacles to efficient code. Here are some of the advantages of using HPX:

  • HPX is solidly rooted in a sophisticated theoretical execution model – ParalleX
  • HPX exposes an API fully conforming to the C++11 and the draft C++14 standards, extended and applied to distributed computing. Everything programmers know about the concurrency primitives of the standard C++ library is still valid in the context of HPX.
  • It provides a competitive, high performance implementation of modern, future-proof ideas, ensuring a smooth migration path from today’s mainstream techniques
  • There is no need for the programmer to worry about lower level parallelization paradigms like threads or message passing; no need to understand pthreads, MPI, OpenMP, or Windows threads, etc.
  • There is no need to think about different types of parallelism, such as task, pipeline, fork-join, or data parallelism.
  • The same source code compiles and runs on Linux, BlueGene/Q, Mac OS X, Windows, and Android.
  • The same code runs on shared memory multi-core systems and supercomputers, on handheld devices and Intel Xeon Phi® accelerators, or on a heterogeneous mix of those.

In this release we have made several significant changes:

  • A major API-breaking change in this release is the implementation of hpx::future and hpx::shared_future in full conformance with the C++11 Standard. While hpx::shared_future is new and will not create any compatibility problems, we revised the interface and implementation of the existing hpx::future. For more details please see the mailing list archive. To avoid incompatibilities in existing code, the type which implements the std::future interface is named hpx::unique_future; in the next release it will be renamed to hpx::future, making it fully conform to the C++11 Standard.
  • A large part of the code base of HPX has been refactored and partially re-implemented. The main changes were related to
    • The threading subsystem: these changes significantly reduce the amount of overheads caused by the schedulers, improve the modularity of the code base, and extend the variety of available scheduling algorithms.
    • The parcel subsystem: these changes improve the performance of the HPX networking layer, modularize the structure of the parcelports, and simplify the creation of new parcel-ports for other underlying networking libraries.
    • The API subsystem: these changes improve the conformance of the API to the C++11 Standard, extend and unify the available API functionality, and decrease the overheads created by various elements of the API.
    • The robustness of the component loading subsystem has been improved significantly, allowing the components needed by an application to be registered more portably and reliably at startup. This additionally speeds up general application initialization.
  • We added new API functionality like hpx::migrate and hpx::copy_component which are the basic building blocks necessary for implementing higher level abstractions for system-wide load balancing, runtime-adaptive resource management, and object-oriented check-pointing and state-management.
  • We removed the use of C++11 move emulation (using Boost.Move), replacing it with native C++11 rvalue references. This is the first step towards using more and more native C++11 facilities which we plan to introduce in the future.
  • We improved the reference counting scheme used by HPX, which helps manage distributed objects and memory. This improves the overall stability of HPX and further simplifies writing real-world applications.
  • The minimal Boost version required to use HPX is now V1.49.0.
  • This release coincides with the first release of HPXPI (V0.1.0), the first implementation of the XPI specification. Please see the HPXPI release notes for more information.

For an extensive list of closed tickets please see the corresponding documentation page.

How to download

  • HPX V0.9.8:
    File MD5 Hash
    zip (4.5M) 38576fdbae99ef291aacdfbd32231ff0
    gz (3.1M) 600425b3c7ef758c553af768cf4e4318
    bz2 (2.5M) 44799c7d6888c4b435bab70d6a4ffc5d
    7z (2.0M) e552a22dca639d647b94353bc2984e8c

If you would like to access the code directly, please clone the git repository here. Please refer to the file README.rst in the root directory inside the downloaded archives or to the documentation for more information about how to get started.

Bug reports via email (hpx-users@cct.lsu.edu), our ticket submission system, or directly through our IRC channel (#ste||ar at Freenode) are welcome.

The development of HPX is supported by U.S. National Science Foundation through awards 1117470 (APX) and 1240655 (STAR), the U.S. Department of Energy (DoE) through the award DE-SC0008714 (XPRESS), and the Bavarian Research Foundation (Bayerische Forschungsstiftung, Germany) through the grant AZ-987-11. In addition we utilize computer resources through allocations from LSU HPC, XSEDE, and the Gauss Center for Supercomputing.