Colloquium on Artificial Intelligence Research and Optimization – Spring 2022

Every first Wednesday of the month, at 3:00 pm CST, on Zoom

Some of today’s most visible and, indeed, remarkable achievements in artificial intelligence (AI) have come from advances in deep learning (DL). The formula for the success of DL has been compute power – artificial neural networks are a decades-old idea, but it was the use of powerful accelerators, mainly GPUs, that truly enabled DL to blossom into its current form.

As significant as the impacts of DL have been, there is a growing realization that current approaches merely scratch the surface of what might be possible, and that researchers could conduct exploratory research on ever larger and more complex systems far more rapidly – if only more compute power could be applied effectively.

There are three emerging trends that, if properly harnessed, could enable such a boost in compute power applied to AI, thereby paving the way for major advances in AI capabilities. 

  • Optimization algorithms based on higher-order derivatives are well-established numerical methods, offering superior convergence characteristics and inherently exposing more opportunities for scalable parallel performance than the first-order methods commonly applied today. Despite their potential advantages, these algorithms have not yet found their way into mainstream AI applications, as they require significantly more powerful computational resources and must manage much larger amounts of data.
  • High-performance computing (HPC) brings more compute power to bear via parallel programming techniques and large-scale hardware clusters, and will be required to satisfy the resource requirements of higher-order methods. That DL is not currently taking advantage of HPC resources is not due to a lack of imagination or initiative in the community. Rather, matching the needs of DL systems with the capabilities of HPC platforms presents significant challenges that can only be met by coordinated advances across multiple disciplines.
  • Hardware architecture advances continue apace, with diversification and specialization increasingly being seen as a critical mechanism for increased performance. Cyberinfrastructure (CI) and runtime systems that insulate users from hardware changes, coupled with tools that support performance evaluation and adaptive optimization of AI applications, are increasingly important to achieving high user productivity, code portability, and application performance.

The colloquium brings together experts in the fields of algorithmic theory, artificial intelligence, and high-performance computing, and aims to transform research in the broader field of AI and optimization. The first focus of the colloquium is distributed AI frameworks, e.g. TensorFlow, PyTorch, Horovod, and Phylanx. One challenge here is the integration of accelerator devices and support for a wide variety of target architectures, since recent supercomputers are increasingly heterogeneous, with some systems providing accelerator cards and others CPUs only. A framework should be easy to deploy and maintain and provide good portability and productivity; abstractions and a unified API that hide the zoo of accelerator devices from users are therefore important.

The second focus is higher-order algorithms, e.g. second-order methods and Bayesian optimization. These methods can deliver higher accuracy, but are more computationally intensive. We will look into both the theoretical and computational aspects of these methods.
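To make the first-order/second-order contrast concrete, here is a minimal plain-Python sketch on a toy one-dimensional objective (our own illustration, not drawn from any of the frameworks above). The Newton step rescales the gradient by the curvature, which here yields far faster progress toward the minimum than a fixed-learning-rate gradient step:

```python
# Toy objective f(x) = x**4, with minimum at x = 0.
def f_prime(x):
    return 4 * x**3          # first derivative (gradient)

def f_double_prime(x):
    return 12 * x**2         # second derivative (curvature)

def gradient_descent(x, lr=0.01, steps=50):
    """First-order method: step along the negative gradient."""
    for _ in range(steps):
        x -= lr * f_prime(x)
    return x

def newton(x, steps=50):
    """Second-order method: scale the step by the inverse curvature."""
    for _ in range(steps):
        x -= f_prime(x) / f_double_prime(x)
    return x

x0 = 1.0
print(abs(newton(x0)))           # very close to the minimizer 0
print(abs(gradient_descent(x0))) # still far from 0 after the same budget
```

In higher dimensions the curvature term becomes a Hessian (or an approximation of it), which is exactly where the extra compute and data-movement costs discussed above come from.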

This will be the second term for our Colloquium. For details from the inaugural Colloquium series, including speaker information and links to presentations, click here.

______________________________________________________________________________

Confirmed Speakers

02/02/2022   George Em Karniadakis   Brown University
03/02/2022   Youngsoo Choi           Lawrence Livermore National Laboratory
04/06/2022   Marta D’Elia            Sandia National Laboratories

Registration

Registration for the colloquium is free. Please complete your registration here: registration form

Local organizers

  • Patrick Diehl
  • Katie Bailey
  • Hartmut Kaiser
  • Mayank Tyagi

For questions or comments regarding the colloquium, please contact Katie Bailey.

Talks

Speaker:            Dr. George Em Karniadakis, Brown University

Date:                  Wed 2/2 @ 3 pm CST

Title:             Approximating functions, functionals and operators with neural networks for diverse applications

Abstract:           We will review physics-informed neural networks (PINNs) and summarize available extensions for applications in computational mechanics and beyond. We will also introduce new NNs that learn functionals and nonlinear operators from functions and corresponding responses for system identification. The universal approximation theorem of operators is suggestive of the potential of NNs in learning from scattered data any continuous operator or complex system. We first generalize the theorem to deep neural networks, and subsequently we apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning across many scales and trained by diverse sources of data simultaneously.
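The branch/trunk structure described in the abstract can be sketched in a few lines of NumPy. This is an untrained toy forward pass showing only the shapes and the final inner product; the layer sizes, sensor count, and implementation are our own assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny fully connected network with tanh hidden activations."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def init(sizes):
    """Random weights for each (in, out) layer pair."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

m, p = 20, 10                 # number of input sensors, latent dimension
branch = init([m, 32, p])     # branch net: encodes input function u at m sensors
trunk  = init([1, 32, p])     # trunk net: encodes an output location y

def deeponet(u_sensors, y):
    b = mlp(branch, u_sensors)   # shape (p,)
    t = mlp(trunk, y)            # shape (p,)
    return float(b @ t)          # G(u)(y) approximated by <branch, trunk>

u = np.sin(np.linspace(0.0, np.pi, m))   # a sampled input function
out = deeponet(u, np.array([0.5]))
print(out)
```

Training would fit both sub-networks jointly against operator input/output pairs; here the output is just a finite random number illustrating the architecture.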

Bio:           George Karniadakis is from Crete. He received his S.M. and Ph.D. from the Massachusetts Institute of Technology (1984/87). He was appointed Lecturer in the Department of Mechanical Engineering at MIT and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program of Applied and Computational Mathematics. He was a Visiting Professor at Caltech in 1993 in the Aeronautics Department and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. After becoming a full professor in 1996, he continued to be a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is an AAAS Fellow (2018-), Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), Fellow of the American Physical Society (APS, 2004-), Fellow of the American Society of Mechanical Engineers (ASME, 2003-) and Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the SIAM/ACM Prize in Computational Science and Engineering (2021), the Alexander von Humboldt award in 2017, the SIAM Ralph E. Kleinman Prize (2015), the J. Tinsley Oden Medal (2013), and the CFD Award (2007) from the US Association for Computational Mechanics. His h-index is 115 and he has been cited over 61,000 times.

______________________________________________________________________________

Speaker:           Dr. Youngsoo Choi, Lawrence Livermore National Laboratory

Date:                  Wed 3/2 @ 3 pm CST

Title:       Physics-constrained data-driven physical simulations using machine learning

Abstract:           A data-driven model can be built to accurately accelerate computationally expensive physical simulations, which is essential in multi-query problems, such as inverse problems, uncertainty quantification, design optimization, and optimal control. In this talk, two types of data-driven model order reduction techniques will be discussed, i.e., the black-box approach that incorporates only data and the physics-constrained approach that incorporates first principles as well as data. The advantages and disadvantages of each method will be discussed. Several recent developments of generalizable and robust data-driven physics-constrained reduced order models will be demonstrated for various physical simulations as well. For example, a hyper-reduced time-windowing reduced order model overcomes the difficulty of advection-dominated shock propagation phenomena, achieving a speed-up of O(20~100) with a relative error much less than 1% for Lagrangian hydrodynamics problems, such as the 3D Sedov blast problem, 3D triple point problem, 3D Taylor–Green vortex problem, 2D Gresho vortex problem, and 2D Rayleigh–Taylor instability problem. The nonlinear manifold reduced order model also overcomes the challenges posed by problems with slowly decaying Kolmogorov n-width by representing the solution field with a compact neural network decoder, i.e., a nonlinear manifold. The space–time reduced order model accelerates a large-scale particle Boltzmann transport simulation by a factor of 2,700 with a relative error less than 1%. Furthermore, successful application of a physics-constrained data-driven method for meta-material lattice–structure design optimization problems will be presented. Finally, the library for reduced order models, i.e., libROM (https://www.librom.net), its webpage, and several YouTube tutorial videos will be introduced. They are useful for education as well as research purposes.
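The projection-based (linear) side of model order reduction mentioned in the abstract can be sketched with a plain NumPy proper orthogonal decomposition (POD): collect solution snapshots, extract a reduced basis via SVD, and project new states onto it. This toy example with synthetic low-rank data is our own illustration, not libROM code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is one full-order solution state.
# Here the snapshots secretly live in a 3-dimensional subspace plus noise.
n, k = 200, 30                      # full dimension, number of snapshots
modes = rng.normal(size=(n, 3))     # hidden low-rank structure
S = modes @ rng.normal(size=(3, k)) + 1e-6 * rng.normal(size=(n, k))

# POD: truncated SVD of the snapshot matrix gives the reduced basis.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 3
Phi = U[:, :r]                      # n x r basis of dominant POD modes

# Project a new full-order state onto the reduced space and reconstruct.
x = modes @ rng.normal(size=3)
x_hat = Phi @ (Phi.T @ x)           # reconstruction from r coefficients
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err)                      # small: the basis captures the state
```

The talk's nonlinear manifold ROMs replace the linear map `Phi` with a trained neural network decoder, which is what handles slowly decaying Kolmogorov n-width problems where a linear basis needs many modes.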

Bio:                    Youngsoo is a computational math scientist in the Center for Applied Scientific Computing (CASC) under the Computing Directorate at LLNL. He currently leads the data-driven reduced order model development team for various physical simulations, with whom he developed the open-source codes libROM (https://www.librom.net) and LaghosROM (https://github.com/CEED/Laghos/tree/rom/rom). libROM is a library for reduced order models, and LaghosROM implements reduced order models for Lagrangian hydrodynamics (https://authors.elsevier.com/c/1e3CuAQEIviQh). He earned his undergraduate degree in Civil and Environmental Engineering from Cornell University, with a minor in applied mathematics, and his PhD in Computational and Mathematical Engineering from Stanford University. He was a postdoc at Sandia National Laboratories and Stanford University prior to joining LLNL in 2017.

______________________________________________________________________________

Speaker:           Dr. Marta D’Elia, Sandia National Laboratories

Date:       Wed 04/06 @ 3:00 pm CDT    

Title:            

Abstract:          

Bio:                    

______________________________________________________________________________