At the end of 2023, the contract was signed to build JUPITER, a new European supercomputer and the first in Europe to reach 1 ExaFLOP/s of HPL performance. The winning bid was for a machine consisting of two parts (modules): JUPITER Booster, which uses 24 000 NVIDIA Grace Hopper superchips to reach the ExaFLOP/s; and JUPITER Cluster, the smaller sibling using SiPearl Rhea1 CPUs for more complex programs.1 The system is going to be hosted at JSC. Quite an honor – and we are very excited about it! Currently, the hardware is being prepared and tested, and the datacenter is rapidly being built.

Leading up to the procurement of JUPITER were many months of work by numerous people at the institute. About 18 months before the contract signature, some of us met to sketch the process and requirements – long before the Request for Information was even published. We assembled a team of HPC researchers and domain scientists to create a suite of benchmarks containing representative and forward-looking applications, in order to evaluate the vendor proposals for JUPITER using the intended use cases themselves.

The result was a suite of 16 applications, supplemented by 7 synthetic benchmarks. Each benchmark uses the same JUBE-based execution infrastructure, was thoroughly analyzed, and is described in detail in its documentation.

[Figure: JUPITER benchmarks sorted into the 7 Berkeley Dwarfs]

Finally, after the dust had settled a little bit — and before the dust of JUPITER itself became too thick — we sat down and wrote everything up in a paper. This was again some effort, collecting contributions and feedback from all colleagues, covering the various aspects in a mere ten pages, and finding the right balance of content. But, finally, the paper Application-Driven Exascale: The JUPITER Benchmark Suite was accepted for publication in the Technical Program of SC24 and is now freely available. The paper can be read at IEEE and at ACM, or on arXiv with virtually identical content.2 The slides of the presentation I gave at SC24 are embedded below; there’s also a recording of my talk.

[Figure: JUPITER benchmarks grand table with all details]

Along with the paper, we published each benchmark workflow on GitHub. One meta-repository, FZJ-JSC/jubench, collects the individual benchmark repositories (like FZJ-JSC/jubench-gromacs) as submodules. The repositories are clones of our internal repositories, cleared of some internal data (like older performance data or the benchmarker’s notes). The repositories themselves are all structured similarly, having evolved from a structure we first used for the JUSUF procurement some years ago (preprint). At the core are an extensive description, which itself follows a well-defined, common structure; the JUBE workflow definition files; and the sources, which are either included directly in the Git tree as submodules or can be retrieved through collection scripts. If required, input data is also provided.
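
To give a feel for how the repositories are used, below is a minimal sketch of fetching one benchmark and driving it with JUBE. It is not taken from the suite itself: the JUBE definition path is a hypothetical placeholder, so please consult the respective repository’s description for the actual file names; the git and jube invocations themselves are the standard ones.

```python
# Minimal sketch: fetch one benchmark repository and drive it with JUBE.
# The JUBE definition path below is an illustrative assumption; the actual
# file names are documented in each repository's description.
import subprocess
from pathlib import Path

REPO = "https://github.com/FZJ-JSC/jubench-gromacs.git"
WORKDIR = Path("jubench-gromacs")
JUBE_DEFINITION = WORKDIR / "benchmark" / "jube" / "benchmark.yaml"  # hypothetical

def run(*cmd: str) -> None:
    """Echo and execute a command, aborting on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Clone including submodules, since sources may be vendored as Git submodules.
if not WORKDIR.exists():
    run("git", "clone", "--recursive", REPO, str(WORKDIR))

# Execute the workflow steps defined in the JUBE file (preparation, build,
# job submission, ...); JUBE creates a run directory according to the
# workflow's outpath.
run("jube", "run", str(JUBE_DEFINITION))

# Once the submitted jobs have finished, the usual follow-up is
#   jube continue <outpath>    and    jube result <outpath>
# to complete pending steps and print the result table.
```

JUBE then takes care of parameterization, job submission, and extraction of the figures of merit defined in the workflow.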

The paper comes with a reproducibility appendix. Since reproducibility is core to a benchmark suite, and we enforced it along the way, we followed the SC24 guidelines to apply for additional reproducibility badges. We wrote up an extensive document, largely auto-generated from the existing workflow files and documentation, to conform to the template requirements. We were able to achieve two of the three possible badges. Given the reviewers’ limited time budget, achieving the final badge proved challenging for a paper encompassing such a wide range of software components, i.e. a whole suite with many benchmarks and many data points. We tried!

It was quite some work to herd the benchmark masses and to write up the material with everyone. But I’m very happy with how it turned out!

We hope we have created a suite with long-lasting impact, one that will also be picked up by others in the community. We will update it with smaller changes and bug fixes – until our next procurement, where we can hopefully reuse many components of the suite.3 While JUPITER is being assembled, we are using the applications of the suite to monitor the progress of the installation in a Continuous Benchmarking fashion. For that, we created a framework, exaCB; stay tuned for the details.
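
The details of exaCB are for another post; purely to illustrate the general continuous-benchmarking idea (and not exaCB itself), a loop like the following sketch could run a suite application regularly and flag regressions against a reference value. Every command, pattern, and threshold in it is a made-up placeholder.

```python
# Generic continuous-benchmarking sketch -- explicitly NOT exaCB, just an
# illustration. Benchmark command, output pattern, and thresholds are
# made-up placeholders.
import re
import subprocess
import time

REFERENCE_NS_PER_DAY = 100.0   # hypothetical reference figure of merit
TOLERANCE = 0.05               # flag results more than 5 % below the reference
INTERVAL_SECONDS = 24 * 3600   # run once per day

def run_benchmark() -> float:
    """Run one benchmark and parse its figure of merit (placeholder logic)."""
    out = subprocess.run(
        ["./run_benchmark.sh"],          # placeholder for the actual JUBE run
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Performance:\s+([\d.]+)\s+ns/day", out)
    if match is None:
        raise RuntimeError("figure of merit not found in benchmark output")
    return float(match.group(1))

while True:
    result = run_benchmark()
    if result < REFERENCE_NS_PER_DAY * (1 - TOLERANCE):
        print(f"Regression: {result:.1f} ns/day vs. reference {REFERENCE_NS_PER_DAY:.1f}")
    else:
        print(f"OK: {result:.1f} ns/day")
    time.sleep(INTERVAL_SECONDS)
```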

  1. For SC23 last year, we created this technical overview of JUPITER, which is still too little known… Some people still report only half-correct details about the system…

  2. You can also find the paper preprint and the presentation in our FZJ library.

  3. At which point the JUPITER Benchmark Suite might re-emerge as the JUniversal Benchmark Suite, the JUelich Benchmark Suite, or something entirely different. I’m accepting proposals.