TBAA 2020: SC Panel

We are pleased to announce that our submission (pan107s1), TBAA: Task-Based Algorithms and Applications, has been accepted for the SC20 Panels program!

Abstract

The new challenges posed by Exascale system architectures have made it difficult to achieve the desired scalability using traditional distributed-memory runtimes. Task-based programming models show promise in addressing these challenges, providing application developers with a productive and performant approach to programming next-generation systems. Empirical studies show that task-based models can overcome the load-balancing issues inherent to traditional distributed-memory runtimes, and that task-based runtimes perform comparably to those systems on balanced workloads. This panel is designed to explore the advantages of task-based programming models on modern and future HPC systems from industry, university, and national laboratory perspectives. It aims to gather application experts and proponents of these models to present concrete and practical examples of using task-based runtimes to overcome the challenges posed by Exascale system architectures.

Schedule

Wednesday, 18 November 2020, 3:00 pm – 4:30 pm

Topic | Duration
Introduction of the panelists | 7 minutes
Introduction to AMT by Adelstein Lelbach | 5 minutes
State of the art: Charm++, Julia, and HPX | 15 minutes
Panel’s Chosen Questions | 48 minutes
Q&A from the audience | 15 minutes

Panel’s Chosen Questions

We propose to discuss the following questions in the panel. The outcome of the discussion should encourage broader adoption of AMT runtime systems for task-based applications.

  1. For what class of applications is an AMT paradigm the best solution for achieving scalable execution?
  2. How does the C++ language standard impact the efficiency and complexity of programming AMT systems?
  3. What are the differences between the HPX, Julia, and Charm++ AMT paradigms, and how do these differences affect parallel performance?
  4. Has the performance of HPX, Julia, and Charm++ been evaluated on scientific applications, AI workloads, or other frameworks? If so, how did those frameworks benefit from these AMT models compared to traditional runtime systems?
  5. How does the supercomputing community benefit from using AMT on modern and future supercomputers?
  6. What hardware features are required in the next-generation processors to support AMT?

Moderator

  • Patrick Diehl, Louisiana State University, Center for Computation and Technology

Panelists

  • Laxmikant V. Kale, University of Illinois
  • Irina P. Demeshko, Los Alamos National Laboratory
  • Bryce Adelstein Lelbach, NVIDIA
  • Hartmut Kaiser, Louisiana State University, Center for Computation and Technology
  • Zahra Khatami, Oracle
  • Keno Fischer, Julia Computing, Inc.
  • Alice Koniges, University of Hawaii