
Five finalists for the esteemed high-performance computing award have achieved breakthroughs in climate modeling, fluid simulation and more with the Alps, JUPITER and Perlmutter supercomputers.
Five finalists for the Gordon Bell Prize for outstanding achievements in high-performance computing (HPC) are using NVIDIA-powered supercomputers for their critical work in climate modeling, materials science, fluid simulation, geophysics and electronic design.
Announced today at SC25, the finalists’ projects are driving AI and HPC for science using physics simulation, high-precision math and other advanced supercomputing techniques, accelerating breakthroughs across weather forecasting, semiconductor design, space exploration and other fields. Their results are openly accessible on arXiv.
The supercomputers powering their work include:
- Alps — hosted at the Swiss National Supercomputing Centre (CSCS) and powered by more than 10,000 NVIDIA GH200 Grace Hopper Superchips.
- Perlmutter — hosted at the National Energy Research Scientific Computing Center (NERSC) and powered by NVIDIA accelerated computing.
- JUPITER — Europe’s first exascale supercomputer, hosted at the Jülich Supercomputing Centre (JSC) and powered by the NVIDIA Grace Hopper platform and NVIDIA Quantum-2 InfiniBand networking.
A rendering of JUPITER supercomputer racks featuring the NVIDIA Grace Hopper platform. Video courtesy of Forschungszentrum Jülich / Sascha Kreklau.
“At CSCS, we don’t just support open science — we accelerate it,” said Thomas Schulthess, director of CSCS. “The extraordinary breakthroughs by this year’s five Gordon Bell finalists in climate modeling, materials science, fluid dynamics and digital twins stand as irrefutable proof: without the Alps supercomputer, these scientific discoveries simply would not exist. Pushing computational boundaries turns bold targets into reality, delivering scientific revolutions that will redefine our world.”
Learn more about the five finalists’ projects below.
ICON: Modeling Earth at Kilometer-Scale
A novel configuration for the ICON Earth system model — developed by researchers at the Max Planck Institute for Meteorology, German Climate Computing Centre (DKRZ), CSCS, JSC, ETH Zurich and NVIDIA — is poised to enable more accurate weather forecasts and a deeper understanding of how the planet works.
By modeling the entire Earth system at kilometer-scale resolution, ICON can capture the flow of energy, water and carbon through the atmosphere, oceans and land in exceptional detail while simulating about 146 days every 24 hours of compute time. At that rate, a decade-long projection completes in roughly 25 days, making high-resolution climate simulations that look decades ahead practical.
A simulation of carbon dioxide flux using the ICON model.
“Integrating all essential components of the Earth system in the ICON model at an unprecedented resolution of 1 kilometer allows researchers to see full global Earth system information on local scales and learn more about the implications of future warming for both people and ecosystems,” said Daniel Klocke, computational infrastructure and model development group leader at Max Planck Institute for Meteorology.
ORBIT-2: Exascale Vision Foundation Models for Weather and Climate Modeling
Developed as part of a collaboration between Oak Ridge National Laboratory, NVIDIA and others — and running on the Alps supercomputer — ORBIT-2 is an AI foundation model for weather and climate downscaling that demonstrates unparalleled scalability and precision.
Tapping into exascale computing and algorithmic innovation, ORBIT-2 overcomes challenges faced by traditional climate models with spatial hyper-resolution downscaling, a technique that creates high-resolution data from lower-resolution sources. This enables teams to capture and predict far more localized phenomena like urban heat islands, extreme precipitation events and subtle shifts in monsoon patterns.
“NVIDIA’s advanced supercomputing technologies enabled ORBIT-2 to achieve exceptional scalability, reliability and impact at the intersection of AI and high-performance computing on NVIDIA platforms,” said Prasanna Balaprakash, director of AI programs and section head for data and AI systems at Oak Ridge National Laboratory.
QuaTrEx: Advancing Transistor Design Through Nanoscale Device Modeling
A team from ETH Zurich has advanced nanoscale electronic device modeling with QuaTrEx, a package of algorithms that can boost the design of next-generation transistors.
Running on the Alps supercomputer with NVIDIA GH200 Superchips, QuaTrEx can simulate devices with more than 45,000 atoms at FP64 precision and with extreme parallel-computing efficiency. This enables faster, more accurate design of nanoribbon field-effect transistors (NRFETs), which will be crucial for the semiconductor industry.
A simulation of the flow of electrons in a nanoribbon transistor. Video courtesy of ETH Zurich.
“Access to Alps was instrumental in the development of QuaTrEx,” said Mathieu Luisier, full professor of computational nanoelectronics at ETH Zurich. “It allowed us to simulate devices that we could not imagine handling just a few months ago.”
Simulating Spacecraft at Record-Breaking Scales With the MFC Flow Solver
Designing spacecraft, especially those with many small engines, requires detailed simulation, as closely packed engines can cause their exhaust plumes to interact and heat up a rocket’s base.
Running on the Alps supercomputer, MFC, an open-source flow solver developed by the Georgia Institute of Technology in collaboration with NVIDIA and others, runs fluid-flow simulations 4x faster and with over 5x greater energy efficiency than the previous world record while maintaining the same accuracy. Based on full-scale runs on Alps, MFC is expected to reach 10x the scale of the previous world record on JUPITER, paving the way for faster, more accurate design of critical components for space exploration.
A rocket engine simulation using computational fluid dynamics. Video courtesy of the Georgia Institute of Technology.
“Our new information geometric regularization method, combined with the NVIDIA GH200 Superchip’s unified virtual memory and mixed-precision capabilities, has drastically improved the efficiency of simulating complex computational fluid flows, enabling us to simulate rocket engine plumes at unprecedented scales,” said Spencer Bryngelson, assistant professor in computational science and engineering at the Georgia Institute of Technology.
A Digital Twin for Tsunami Early Warning
The University of Texas at Austin, Lawrence Livermore National Laboratory and the University of California San Diego have created the world’s first digital twin that can issue real-time probabilistic tsunami forecasts based on a full-physics model.
Applied to the Cascadia subduction zone in the Pacific Northwest, the digital twin completed, in just 0.2 seconds on the Alps and Perlmutter supercomputers, computations that would normally take 50 years on 512 GPUs, representing a 10 billion-fold speedup.
“For the first time, real-time sensor data can be rapidly combined with full-physics modeling and uncertainty quantification to give people a chance to act before disaster strikes,” said Omar Ghattas, professor of mechanical engineering at UT Austin. “This framework provides a basis for predictive, physics-based emergency-response systems across various hazards.”
For the tsunami digital twin, ICON and MFC projects, NVIDIA CUDA-X libraries played a key role in maximizing the performance and efficiency of the complex simulations. ICON also taps into NVIDIA CUDA Graphs, which let a sequence of GPU operations be defined and launched as a single graph rather than as many individual kernel launches, reducing launch overhead.
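For readers curious how graph launch works in practice, below is a minimal, purely illustrative CUDA sketch of the stream-capture pattern. The `scale` kernel and loop structure are invented for this example and are not taken from ICON or any finalist project; the point is simply that a captured sequence of kernels can be replayed with a single launch call per step.

```cpp
// Illustrative only: a minimal CUDA Graphs sketch, not code from ICON or any finalist project.
// A short sequence of kernel launches is captured into a graph once, then replayed with a
// single launch call per step, avoiding per-kernel launch overhead on the CPU side.
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical stand-in kernel; a real model would launch its physics kernels here.
__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float* d_x = nullptr;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemset(d_x, 0, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Record the kernel sequence into a graph instead of executing it immediately.
    cudaGraph_t graph;
    cudaGraphExec_t graphExec;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    for (int step = 0; step < 4; ++step)
        scale<<<(n + 255) / 256, 256, 0, stream>>>(d_x, 1.01f, n);
    cudaStreamEndCapture(stream, &graph);
    cudaGraphInstantiateWithFlags(&graphExec, graph, 0);

    // Replay the whole captured sequence with one launch per time step.
    for (int step = 0; step < 100; ++step)
        cudaGraphLaunch(graphExec, stream);
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(d_x);
    printf("replayed captured graph 100 times\n");
    return 0;
}
```

The payoff grows with the number of small kernels per time step: the CPU issues one graph launch instead of hundreds of individual launches, which matters for models like ICON that run many short kernels every simulated step.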
Learn more about the latest supercomputing advancements by joining NVIDIA at SC25, running through Thursday, Nov. 20.
