Working Groups
Projects & Activities
The Applications and Benchmarks Working Group has its origin in the Applications working group, which has existed since the start of the ADAC Institute in 2016.
The primary goal of the group is to share experiences in developing or porting applications and benchmarks to accelerated computer systems using the full range of programming tools and environments. The group meets online once per month for a seminar or a discussion of relevant topics. Members collaborate in joint hands-on projects through the discretionary user programs at the ADAC institutions that deploy large systems. To share expertise and avoid duplication of effort, the group maintains a sizable list of scientific applications and benchmarks that run, or have run, on our accelerated computer systems, with technical details such as language, communication and programming models, library dependencies, and institutional points of contact. The group is currently led by Sebastian Keller at CSCS, Tjerk Straatsma at ORNL, and Rio Yokota at Institute of Science Tokyo.
Applications & Benchmarks Working Group Monthly Seminars
| Date | Host | Attn | Speaker | Title |
| --- | --- | --- | --- | --- |
| Oct 23, 2020 | CSC | 20 | Georgios Markomanolis | Towards a Benchmark Methodology |
| | CSCS | | Anton Kozhevnikov | HPC libraries for the electronic structure domain |
| Nov 27, 2020 | FZJ | 27 | Andreas Herten | Enabling Applications for one of Europe’s Largest GPU Systems: The JUWELS Booster Early Access Program |
| | | | Jaro Hokkanen | Performance portability on HPC accelerator architectures with modern techniques: The ParFlow blueprint |
| Jan 21, 2021 | AIST | 16 | Peng Chen | High-resolution Image Reconstruction on ABCI supercomputer |
| | | | Hiroki Kanazashi | Optimizing Data Allocations to Optane DCPMM and DRAM for Billion-Scale Graphs |
| Feb 26, 2021 | RIKEN | 19 | Naoki Yoshioka | Simulation of quantum circuits in classical supercomputers |
| | | | Makoto Tsubokura | Viral Droplet/Aerosol Dispersion Simulation on the Supercomputer “Fugaku” and Fight Back against COVID-19 |
| Mar 26, 2021 | ORNL | 20 | Norbert Podhorszki | Codesign of I/O for Exascale data: Enhancing ADIOS for extreme-scale data movement |
| | | | Scott Klasky | MGARD: Hierarchical Compression of Scientific Data |
| Apr 22, 2021 | NCI | 10 | Rui Yang | Jupyter-Dask based python framework |
| | | | Matthew Downton | Experience scaling bioinformatics pipelines at NCI |
| Jun 25, 2021 | Univ Tokyo | 22 | Yohei Miki | Gravitational octree code performance on NVIDIA A100 |
| | | | Naoyuki Onodera | GPU acceleration of tracer dispersion simulation using the locally mesh-refined lattice Boltzmann method |
| Jul 23, 2021 | ANL | 24 | Ye Luo | Enabling Performance Portable QMCPACK on Exascale Supercomputers |
| | | | JaeHyuk Kwack | Porting GAMESS RI-MP2 mini-app from CPU to GPU with roofline performance analysis |
| Aug 26, 2021 | LLNL | 7 | Sam Jacobs, Brian Van Essen | A Scalable Deep Learning Toolkit for Leadership-class Large Scale Scientific Machine |
| | | | Stephanie Brink, Olga Pearce | Pinpointing Performance Bottlenecks with Hatchet |
| Oct 22, 2021 | CEA | 13 | Luigi Genovese | Broadening the scope of large-scale DFT calculations |
| Dec 3, 2021 | KTH | 22 | Niclas Jansson | Refactoring legacy Fortran applications to leverage modern heterogeneous architectures in extreme-scale CFD |
| | | | Szilárd Páll | Bringing GROMACS to exascale-generation heterogeneous architectures |
| Feb 24, 2022 | Tokyo Tech | | | |
| Mar 25, 2022 | RIKEN | 8 | Takami Tohyama | Applications of time-dependent density-matrix renormalization group to strongly correlated electron systems |
| | | | Kenta Sueki | Ensemble Kalman Filter-Based Parameter Estimation for Atmospheric Models: Parameter Uncertainty for Reliable and Efficient Estimation |
| Apr 22, 2022 | LLNL | 19 | Brian Van Essen | Second-order Optimization with Sub-graph Parallelism on LBANN for COVID-19 Small Molecule Drug Design |
| May 26, 2022 | AIST | | Erik Deumens | High-Performance Computing in the World of Data and Artificial Intelligence |
| | | | Ricardo Macedo | Building User-level Storage Data Planes with PAIO |
| Jun 24, 2022 | Univ Tokyo | 12 | Kohei Fujita | GPU porting of implicit solver with Green’s function-based neural networks |
| | | | Yohei Miki | Porting N-body code to AMD GPU and performance evaluation |
| Jul 22, 2022 | FZJ | 18 | Dennis Willsch | Jülich Universal Quantum Computer Simulator |
| | | | Bartosz Kostrzewa | Enabling state-of-the-art lattice QCD simulations using a legacy code on life support and the QUDA library |
| Sep 1, 2022 | ORNL | 20 | Bronson Messer | An Introduction to the Frontier: Building and Using the World’s First Exascale Computer |
| Sep 23, 2022 | CSCS | | Sebastian Keller | Cornerstone Octree: domain decomposition on GPUs for particle-based simulations |
| Oct 28, 2022 | CSC | 12 | Martti Louhivuori, Cristian-Vasile Achim, Jaro Hokkanen | LUMI GPU porting of three scientific applications |
| Dec 1, 2022 | NCI | 8 | Ben Evans | Data Science and AI/ML activities at ANU |
| | | | Rui Yang | Data science and AI/ML software |
| | | | Maruf Ahmed | FourCastNet, IceNet, and ImageNet |
| Jan 27, 2023 | Tokyo Tech | | Rio Yokota | Training large vision+language models as an INCITE project |
| | | | Qianxiang Ma | Scalable Linear Time Dense Direct Solver for 3-D Problems Without Trailing Sub-Matrix Dependencies |
| Mar 23, 2023 | ORNL | 11 | Woong Shin | Post-Exascale HPC Energy Efficiency – Increasing Energy Awareness |
| Apr 28, 2023 | RIKEN | | Andrès Xavier Rubio Proano | Performance Metrics for Managing Heterogeneous Memory in HPC Applications |
| | | | Eisuke Kawashima | Development of Massively Parallelized Quantum Chemical Software NTChem on Fugaku |
| May 26, 2023 | ANL | 16 | Justin Wozniak | Deep Learning Workflows with CANDLE |
| Jun 22, 2023 | Univ Tokyo | 11 | Kazuya Yamazaki | Porting an atmospheric model to GPUs using OpenACC |
| | | | Ryohei Kobayashi | Accelerating astrophysics simulation with GPUs and FPGAs |
| Jul 28, 2023 | CSCS | | Raffaele Solcà, Mikael Simberg | Experiences with C++ std::execution: DLA-Future |
| Aug 25, 2023 | CSC | | Martti Louhivuori | Header Only Porting: a light-weight header-only library for CUDA/HIP porting |
| Oct 27, 2023 | LLNL | 8 | Tom Stitt | Experiences porting the LLNL multiphysics code Marbl to AMD GPUs on El Capitan Early Access Systems |
| | | | Aaron Skinner | A Matrix-Free Saddle-Point Diffusion Solver for GPU Performance |
| Jan 12, 2024 | AIST | 14 | Yusuke Tanimura | Introduction of ABCI |
| | | | Ryosuke Nakamura | Construction of “Digital twin” on ABCI |
| | | | Toru Kouyama | Large-scale database of satellite imagery and its web-based data sharing service on ABCI |
| Jan 26, 2024 | KTH | 25 | Andrey Alekseenko | GROMACS: Using SYCL to target AMD GPUs |
| | | | Szilárd Páll | Heterogeneous multi-GPU task scheduling in GROMACS using CUDA graphs |
| Feb 9, 2024 | ORNL | 40 | Ada Sedova, Mathieu Taillefumier, Oscar Hernandez | Understanding correctness, reproducibility and portability for applications in the new HPC+AI programming model |
| Mar 22, 2024 | FZJ | | Mathis Bode | Towards Exascale Computing with High-Order Spectral Element Methods |
| | | | Jayesh Badwaik | Continuous Benchmarking with ExaCB |
| Apr 26, 2024 | CEA | 27 | Maxime Delorme, Arnaud Durocher | Dyablo: A simulation code for astrophysics fluids with adaptive mesh refinement in the exascale era |
| May 23, 2024 | NCI | | | Discussion on collaborative projects |
| Jul 26, 2024 | Tokyo Tech | | Ryo Ohnishi | Integrated technology of simulation and ML: Super-Resolution Simulation for Real-Time Prediction of Urban Micro-Meteorology |
| | | | Jiamian Huang | Effect of Regularization in Fast Multipole Method for Molecular Simulation |
| Aug 23, 2024 | C-DAC | | Vivek Gavane | Integrated Computing Environment (ICE) for genomics and molecular modelling applications |
| | | | Upasana Dutta | Disaster simulations and management – Early warning system for flood prediction in the Large River Basins Using FOSS Tool and HPC |
| Sep 19, 2024 | RIKEN | | | |
| Nov 1, 2024 | CSCS | 35 | Hannes Vogt, Alexandru Calotoiu, Rico Häuselmann | Introduction to GT4Py; GT4Py with DaCe |
| Dec 10, 2024 | ANL | | Youngjun Lee | ORCHA: A Code Generation and Synthesis-Based Orchestration System for a Multiphysics Simulation Software on Heterogeneous Systems |
| | | | Akash Dhruv | CodeScribe: A GenAI Helper Tool for Leveraging Large Language Models for Code Translation and Software Development in Scientific Computing |
| Feb 27, 2025 | ORNL | | Aditya Kashi | Mixed precision numerical methods for science – why and how? |
| Apr 25, 2025 | LBNL | 30 | Bhargav Sriram Siddani | The Why and How of interfacing AMReX codes with Python |
| | | | Jean Luca Bez | Drishti: I/O Insights and Recommendations for All |
Applications Working Group Projects
GronOR: Non-Orthogonal Configuration Interaction
Tjerk P. Straatsma and Coen de Graaf
GronOR is a non-orthogonal configuration interaction (NOCI) application developed by the University of Groningen, Oak Ridge National Laboratory, and the University Rovira i Virgili. The target scientific application is the description of the electronic structure of molecular assemblies in terms of basis functions that can be interpreted as particular combinations of molecular electronic states. The electronic states obtained in this basis can be interpreted directly in terms of molecular states, and with appropriate unitary transformations the canonical molecular orbitals can be transformed into a description resembling the valence-bond picture, which describes the electronic structure of molecules in terms of Lewis structures. The method also allows an accurate description of processes that occur locally, such as the excitation of one molecule in the nanostructure. The basic methodology consists of the generation of spin-adapted, antisymmetrized combinations (SAACs) of molecular wavefunctions, followed by a non-orthogonal configuration interaction calculation using these SAACs as many-electron basis functions (MEBFs). The NOCI wavefunction of a cluster or ensemble of molecules is written as a linear combination of MEBFs, each formed as a SAAC of the wavefunctions for a particular state of each molecule in the ensemble. Using fully relaxed molecular electronic states and correlated molecular wavefunctions has the advantage that orbital relaxation and local correlation effects can be properly included in the description of the locally excited states, while avoiding lengthy CI expansions. The implementation of the NOCI method in GronOR is interfaced with OpenMolcas to obtain the CASSCF CI vector, the state-specific CASSCF orbitals, and the required two-electron integrals. The evaluation of the Hamiltonian matrix elements involves many contributions in the form of determinant pairs that can be calculated independently.
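Schematically, in standard NOCI notation (our summary of the formulation described above, not a formula quoted from the GronOR papers), the ansatz and the resulting generalized eigenvalue problem read:

```latex
\Psi = \sum_{K} c_{K}\,\Phi_{K},
\qquad
\sum_{L} \bigl( H_{KL} - E\, S_{KL} \bigr)\, c_{L} = 0,
\qquad
H_{KL} = \langle \Phi_{K} | \hat{H} | \Phi_{L} \rangle,
\quad
S_{KL} = \langle \Phi_{K} | \Phi_{L} \rangle .
```

Because the MEBFs are mutually non-orthogonal, the overlap matrix S is not the identity, which is what makes the eigenvalue problem generalized and each matrix element a sum over many independent determinant-pair contributions.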
For the massively parallel implementation of the algorithm we adopted a task-based approach with a master/worker model to achieve load balancing and fault-resilient execution. (http://www.gronor.org)
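The master/worker pattern described above can be sketched with a thread pool that hands out independent determinant pairs to workers as they become free. All names and the pair-evaluation function below are illustrative stand-ins, not the actual GronOR implementation, which distributes this pattern across many nodes for fault-resilient execution.

```python
# Illustrative master/worker task pool for independent determinant-pair
# contributions. pair_contribution is a dummy stand-in: real code would
# contract two-electron integrals for the pair <D_i|H|D_j>.
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations_with_replacement

def pair_contribution(pair):
    i, j = pair
    return 1.0 / (1 + i + j)  # placeholder value, not a physical quantity

def build_contributions(n_dets, n_workers=4):
    # The "master" enumerates all unique determinant pairs; pool workers
    # pull tasks as they finish, which balances unevenly sized tasks.
    pairs = list(combinations_with_replacement(range(n_dets), 2))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        values = list(pool.map(pair_contribution, pairs))
    return dict(zip(pairs, values))

contribs = build_contributions(4)
print(len(contribs))  # 10 unique pairs for 4 determinants
```

The same dynamic dispatch gives load balancing for free: slow tasks simply occupy a worker longer while other workers keep draining the queue.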
FMM & MSM: Algorithms for Long-Range Calculations in Classical Molecular Dynamics
Rio Yokota
In classical molecular dynamics simulations, the electrostatic (Coulomb) potential induces a global interaction between atoms. When calculated directly, this requires a computational cost of O(N^2) for N atoms. A common fast algorithm for calculating electrostatic forces is the particle-mesh Ewald (PME) method, which derives its speed from the efficiency of the FFT for problems with high uniformity. The recent trend toward hardware architectures with increasing parallelism poses a challenge for these FFT-based algorithms. Therefore, alternative algorithms such as the fast multipole method (FMM) and the multilevel summation method (MSM) are being considered. To enable a transition to such alternatives, a common interface to these algorithms must be developed. The developers of NAMD and GROMACS are interested in this approach.
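A common interface of the kind suggested above could look like the following sketch. The class and method names are our assumptions for illustration, not an agreed NAMD/GROMACS API; an O(N^2) direct sum serves as the reference backend that a PME, FMM, or MSM implementation would replace behind the same interface.

```python
# Sketch of a pluggable interface for long-range electrostatics backends.
from abc import ABC, abstractmethod
import math

class CoulombSolver(ABC):
    @abstractmethod
    def energy(self, positions, charges):
        """Return the total electrostatic energy (reduced units)."""

class DirectSum(CoulombSolver):
    # O(N^2) reference implementation; an FMM or MSM backend would
    # implement the same interface with better asymptotic cost.
    def energy(self, positions, charges):
        e = 0.0
        n = len(charges)
        for i in range(n):
            for j in range(i + 1, n):
                r = math.dist(positions[i], positions[j])
                e += charges[i] * charges[j] / r
        return e

solver = DirectSum()
e = solver.energy([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [1.0, 1.0])
print(e)  # 1.0 for two unit charges at unit separation
```

With such an interface, an MD engine can swap algorithms at configuration time without touching the integration loop, which is the practical point of the proposed common interface.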
The ADAC Portability, Sustainability, and Integration Working Group holds monthly meetings to discuss a broad range of topics related to research and practice. These topics include performance-portable software and libraries, software portfolio management, and policies and strategies for software deployment across HPC centers. Current topics of discussion center on:
● OpenMP verification suites (led by Swaroop Pophale at ORNL)
● Continuous integration effort (led by Ben Cumming at CSCS)
● SYCL programming model (led by Deepika H V at C-DAC)
● Software usage survey (led by Kento Sato at RIKEN)
● Performance-portable programming software research for emerging languages and hardware (led by Roberto Gioiosa at PNNL)
At the ADAC14 meeting, we discussed the software product portfolio across ADAC supercomputing centers, focusing on integrating Artificial Intelligence (AI) and Machine Learning (ML) into our future technology stack.
At the ADAC15 meeting, we continued to address the management of AI and ML software and model data, which present unique portability and sustainability challenges compared to the traditional HPC software stack.
The insights gained from these discussions aim to advance computational science by democratizing software, data, and computation for scientific users.
Portability, Sustainability & Integration Working Group Monthly Seminars
| Date | Host | Attn | Speaker | Title |
| --- | --- | --- | --- | --- |
| Aug 11, 2023 | LLNL | | Todd Gamblin | Software sustainability activities in US |
| Sep 8, 2023 | NCI | | Joseph John | Dynamic Multi-GPU Load Balancing in a Task-Based Dataflow Programming Model |
| | LLNL | | Todd Gamblin | Is Software Foundation Model Feasible for Scientific Software Sustainability? |
| Oct 12, 2023 | ORNL | | Swaroop Pophale | OpenMP Validation and Verification Suites |
| Dec 8, 2023 | | | Ben Cumming | |
| Jan 12, 2024 | C-DAC | | Deepika H V | Impact of HPC-IDE for the HPC community |
| Feb 8, 2024 | RIKEN | | Kento Sato | Feasibility study on NextFugaku system software and the survey of software usage on Fugaku |
| Feb 9, 2024 | ORNL | | Oscar Hernandez | Panel Discussion “HPC+AI Application Correctness” |
The Quantum Computing WG of ADAC was established in 2023, following discussions during the ADAC12 meeting in Japan. Recognizing the growing interest in quantum computing and its potential role in traditional high-performance computing (HPC) settings, the WG was formed to actively explore and discuss these possibilities.
Quantum computing, though still an emerging technology, holds immense potential to revolutionize HPC in specific application areas. Exploring and enabling the uptake of the new computing paradigm has thus become an important activity for many computing centers around the world. The ADAC community boasts a significant talent pool for setting up quantum-accelerated compute infrastructure. Leveraging our strong background in traditional HPC, our combined efforts can significantly catalyze the early adoption of HPC+QC.
Balancing HPC, artificial intelligence, and quantum computing to tackle the grand challenges of scientific computing is a complex task. The Quantum Computing WG provides a forum for the exchange of ideas, concrete implementations, and case studies. Our goal is to enhance the overall understanding of quantum computing’s utility, particularly from an HPC perspective.
The WG organizes a series of “mini-talks” on various aspects of HPC+QC, covering topics such as general challenges, compilation, supporting software, and the efficient integration of AI/ML. Recently, our efforts have focused on drafting a white paper on the role of quantum computing, aimed at a general HPC audience.
We encourage all ADAC members to attend our monthly WG meetings. While several quantum computing discussion forums exist, the ADAC Quantum Computing WG stands out for its solid HPC foundation. Building on this, the WG takes a no-nonsense, non-hype approach to quantum computing, aiming for practical quantum advantage for the end users of our respective HPC services. Welcome aboard!
The Federated Learning and AI Working Group is one of ADAC’s newest initiatives, launched during the ADAC12 meeting. Its primary aim is to foster discussions and facilitate the exchange of experiences in developing, deploying, and leveraging AI to drive transformative scientific innovations on world-class, large-scale computing systems.
The group convenes periodically through virtual sessions, hosting seminars, discussions, and collaborative exchanges on pertinent topics. Members are actively working toward joint hands-on projects, utilizing the large-scale system deployments available at various institutions within the ADAC network.
The working group’s activities are centered around three key focus areas:
Federated Learning: A significant initiative involves silo-based federated learning, enabling models to be trained across geographically distributed resources within ADAC member organizations while ensuring data privacy and integrity.
AI-Assisted Synthetic Data Generation: The group is exploring this emerging field to address critical challenges such as data scarcity, privacy concerns, and bias reduction, highlighting its growing importance across domains.
AI Benchmarking: The group aims to develop benchmarks tailored to real-world scientific challenges, such as climate modeling, drug discovery, genomics, and materials science. These benchmarks focus on capturing the complexity, uncertainty, and constraints of domain-specific problems, moving beyond generic tasks to address issues with direct societal and scientific relevance.
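For the federated-learning focus area above, the core aggregation step (federated averaging, with each silo weighted by its local dataset size) can be sketched in a few lines. This toy version is ours for illustration; production deployments across ADAC sites would rely on a dedicated federated-learning framework and secure transport.

```python
# Toy sketch of silo-based federated averaging (FedAvg): each silo trains
# locally and only parameter vectors leave the site, preserving data
# privacy; the server combines them weighted by local dataset size.
def federated_average(silo_params, silo_sizes):
    """Weighted average of per-silo parameter vectors (lists of floats)."""
    total = sum(silo_sizes)
    dim = len(silo_params[0])
    avg = [0.0] * dim
    for params, n in zip(silo_params, silo_sizes):
        for k in range(dim):
            avg[k] += (n / total) * params[k]
    return avg

# Two silos; the second holds 3x more data, so it dominates the average.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
print(global_model)  # [2.5, 3.5]
```

The weighting by dataset size is what lets geographically distributed silos of very different scale contribute proportionally to the shared model without any raw data leaving a member organization.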
This international collaboration has been made possible through the strong connections within the ADAC community. The group is co-led by Jason Haga (AIST) and Feiyi Wang (ORNL), who guide its efforts to advance AI innovation and application.
Federated Learning & AI Working Group Monthly Seminars
| Date | Host | Attn | Speaker | Title |
| --- | --- | --- | --- | --- |
| Oct 25, 2023 | ORNL | | Ryan Prout | Experiences Building a Federated Learning Platform for HPC facilities |
| Feb 14, 2024 | ANL | | Ravi Madduri | |
System Management
This working group shares information about ADAC members’ training programs and best practices around training and outreach. In the long term, the Training, Outreach & Workforce Development (TOWD) working group aims to create an “ADAC network” by encouraging young students to work in HPC/AI/QC and to benefit from exchanges and programs among ADAC members.
The TOWD working group is led by France Boillod-Cerneux of CEA, Kengo Nakajima of the University of Tokyo, and Maria Grazia Giuffreda of CSCS.