The STE||AR Group is proud to announce the availability of HPX V0.9.9! HPX is a general purpose C++ runtime system for parallel and distributed applications of any scale. This release includes over 1,500 commits and more than 200 bug fixes! It would not have been possible without the support of STE||AR Group members and contributors from all over the world. Thank you to everyone who downloaded, commented on, and contributed to the code over the past six months! In addition to our established partners, we welcomed 7 new developers who have contributed time and code to HPX since the last release.
What’s New?
With the changes below, HPX is once again leading the charge into a whole new era of computation. We strongly believe that new approaches to managing parallelism are required if we are to better utilize the capabilities of today’s and tomorrow’s hardware. HPX implements many ideas targeting improved parallel scalability and performance of parallel applications.
HPX ensures that application developers no longer have to fret about where a segment of code executes. It exposes a uniform programming interface strongly aligned with the C++11/14 standards, extended and applied to distributed computing (a short sketch follows the list below). This allows developers to focus their time and energy on understanding the data dependencies of their algorithms, and thereby the core obstacles to efficient code. Here are some more points that make HPX so special:
- It provides a competitive, high performance implementation of modern, future-proof ideas, ensuring a smooth migration path from today’s mainstream techniques
- There is no need for the programmer to worry about lower-level parallelization paradigms like threads or message passing; no need to understand pthreads, MPI, OpenMP, or Windows threads.
- There is no need to think about different kinds of parallelism, such as task parallelism, data parallelism, pipelines, or fork-join.
- The same source code compiles and runs on Linux, BlueGene/Q, Mac OS X, Windows, and Android.
- The same code runs on shared-memory multi-core systems and supercomputers, on handheld devices and Intel Xeon Phi® accelerators, or on a heterogeneous mix of those.
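To give a flavor of this uniform interface, here is a minimal sketch of launching work asynchronously with hpx::async and composing the result with a continuation. The exact header names are assumptions and may vary between HPX versions, but hpx::async and hpx::future mirror their C++11 counterparts:

```cpp
// Minimal sketch: asynchronous execution with hpx::async / hpx::future.
// Header names are assumptions and may differ between HPX versions.
#include <hpx/hpx_main.hpp>       // wraps main() so the HPX runtime is initialized
#include <hpx/include/lcos.hpp>   // hpx::async, hpx::future

#include <iostream>

int square(int x)
{
    return x * x;
}

int main()
{
    // Schedule square(7) on an HPX thread; returns immediately with a future,
    // just like std::async/std::future in C++11.
    hpx::future<int> f = hpx::async(square, 7);

    // Attach a continuation instead of blocking -- one of the extensions
    // HPX layers on top of the standard interface.
    hpx::future<int> g = f.then(
        [](hpx::future<int> r) { return r.get() + 1; });

    std::cout << g.get() << std::endl;   // prints 50
    return 0;
}
```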
The main highlights of this release are:
- Completing the refactoring of hpx::future to be properly C++11 standards conforming.
- Overhauling our build system to support newer CMake features to make it more robust and more portable.
- Implementing a large part of the parallel algorithms proposed in the C++ Technical Specification documents N4104, N4088, and N4107 (see the sketch after this list).
- Adding examples such as the 1D Stencil and the Matrix Transpose series.
- Improving the existing documentation.
- Remodeling our testing infrastructure, which allows us to quickly discover, diagnose, and fix bugs that arise during development.
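As an illustration of the new parallel algorithms, the following is a minimal sketch of a for_each invocation with a parallel execution policy in the style of the Parallelism TS. The header and namespace spellings shown (hpx::parallel, par) are assumptions and may differ between HPX versions:

```cpp
// Minimal sketch: a Parallelism-TS style algorithm with an execution policy.
// Header and namespace spellings are assumptions for this HPX version.
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_each.hpp>

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(1000, 1);

    // The 'par' execution policy asks HPX to run the lambda on HPX threads
    // spread across the available cores.
    hpx::parallel::for_each(hpx::parallel::par, v.begin(), v.end(),
        [](int& x) { x *= 2; });

    std::cout << v.front() << std::endl;   // prints 2
    return 0;
}
```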
We have spent a lot of time and effort improving the overall performance of HPX and developing new implementation techniques that help increase the scalability and parallel efficiency of applications written using it.
How to Download:
- HPX V0.9.9:
| File | MD5 Hash |
|---|---|
| zip (4.9M) | 61c3c9e250de24003e7ffb8cdcc1c836 |
| gz (3.4M) | 0a716ebf19ef64f3073754f2e7040dd8 |
| bz2 (2.7M) | 2078fa2a20b90d96907e17815da13bb8 |
| 7z (2.3M) | 3efc8847ad128cb5eb50121b94df3d52 |
If you would like to access the code directly, please clone the git repository here. Please refer to the file README.rst in the root directory of the downloaded archives or to the documentation for more information about how to get started.
Bug reports via email (hpx-users@cct.lsu.edu), our ticket submission system, or directly through our IRC channel (#ste||ar on Freenode) are welcome.
The development of HPX is supported by the U.S. National Science Foundation through awards 1117470 (APX), 1240655 (STAR), 1447831 (PXFS), and 1339782 (STORM), the U.S. Department of Energy (DoE) through award DE-SC0008714 (XPRESS), and the Bavarian Research Foundation (Bayerische Forschungsstiftung, Germany) through grant AZ-987-11. In addition, we utilize computer resources through allocations from LSU HPC, XSEDE, and the Gauss Centre for Supercomputing.