Minisymposia
Advances of Deep Learning in Economics
This minisymposium, “Advances of Deep Learning in Economics,” focuses on the intersection of economic research and computational methods. Esteemed speakers, including Douglas Araujo from the Bank for International Settlements, Jonathan Payne from Princeton University, Aleksandra Friedl from the Ifo Institute, and Adam Zhang from the University of Minnesota, will share their ground-breaking work. Araujo’s presentation, “Benchmarking economic reasoning in artificial intelligence models,” leverages insights from the large language model benchmarking literature and the social economics literature to inform the design of benchmarking tests. Payne’s talk, “Deep Learning Solutions to Master Equations for Continuous Time Heterogeneous Agent Macroeconomic Models,” applies deep learning to understand the complexities of continuous-time models featuring rich heterogeneity. Friedl will discuss “Green energy transition: decarbonisation of developing countries and the role of technological spillovers,” highlighting deep learning’s efficacy in solving high-dimensional climate economics models. Lastly, Zhang’s “Before and After Target Date Investing: The General Equilibrium Implications of Retirement Saving Dynamics” explores financial innovation’s equilibrium effects using a unique machine learning approach. This minisymposium exemplifies the profound impact of computational methods, particularly deep learning, on advancing economic modeling and analysis, promising new insights in econometrics, macroeconomics, and finance.
Organizer(s): Aleksandra Friedl (ifo Institut), Simon Scheidegger (University of Lausanne), and Yucheng Yang (University of Zurich)
Domain: Applied Social Sciences and Humanities
AI in Life Sciences and Healthcare: Recent Advances, Challenges, and the Path Forward
Advances in Artificial Intelligence (AI) are reshaping life sciences and healthcare research. Our minisymposium explores AI’s transformative role, delving into Computational Science, Computer Vision, and High-Performance Computing (HPC). Sessions span cancer research, personalized medicine, interactive machine learning, and privacy-preserving federated learning, offering a panoramic view of AI’s applications. In cancer research, Dr. Ellingson discusses the impact of developing models that integrate biological factors and social determinants of health using HPC. Precision imaging, spotlighted by Dr. Gichoya, leverages vast reservoirs of images to advance precision medicine while taking serious ethical concerns into account. The minisymposium addresses AI’s black-box challenge, crucial for trust and accountability. Dr. Jaeger shares human-centered approaches to recent developments in probabilistic modeling and explainable AI in cancer care. Challenges like data availability, scalability, and ethical AI practices are focal points. Dr. Madduri discusses federated learning’s potential to build trustworthy AI in life sciences across institutions. The minisymposium aims to shape ethical standards, foster collaboration, and address the multifaceted challenges of guiding ethical AI integration in life sciences and healthcare. We aim to influence best practices and guidelines, propelling the synergy between AI and life sciences for the benefit of all.
Organizer(s): Destinee Morrow (Lawrence Berkeley National Laboratory), Rafael Zamora-Resendiz (Lawrence Berkeley National Laboratory), and Silvia Crivelli (Lawrence Berkeley National Laboratory)
Domain: Life Sciences
Application Perspective on SYCL, a Modern Programming Model for Performance and Portability
HPC and data-intensive computing now stand as the fourth pillar of science. The integration of machine learning, AI, and HPC techniques, combined with the availability of massive amounts of compute resources, holds the promise of revolutionizing scientific computing. However, while fast-moving technologies promise many exciting possibilities, they also bring drawbacks and risks. One such risk is reliance on proprietary tools: walled gardens of vendor libraries and APIs. Scientific communities require standards-based, portable, and interoperable tools to program computing systems—from edge to HPC data center to cloud—to achieve the goal of efficiently combining physics-based simulations with novel machine learning and AI-based methods. SYCL promises to be such a tool, offering interoperability and avoiding vendor lock-in, highlighted by its role as the cornerstone technology of the SYCLOPS project for the development of an open ecosystem for AI acceleration. SYCL is a high-level, vendor-agnostic standard for heterogeneous computing with several mature implementations targeting a wide range of hardware accelerators. It has been adopted by several large HPC projects as their performance-portability layer. This minisymposium aims to discuss SYCL primarily from the perspective of scientific application developers, sharing experiences and promoting interdisciplinary communication in software engineering for modern, performance-portable, and maintainable code.
Organizer(s): Andrey Alekseenko (KTH Royal Institute of Technology, SciLifeLab), and Szilárd Páll (KTH Royal Institute of Technology)
Domain: Computational Methods and Applied Mathematics
Architectures for Hybrid Next-Generation Weather and Climate Models
Climate and weather models, traditionally built using low-level languages like Fortran for performance, face sustainability challenges due to evolving hardware architectures and advances in machine learning (ML). ML models are rapidly approaching the effectiveness of physics-based models, suggesting a future shift towards hybrid systems that blend classic numerical methods with ML. This evolution necessitates exploring new tools and methodologies to address performance portability issues and the integration of physics-based models with high-performance GPU and ML frameworks in Python. Significant progress has been made with domain-specific languages or general-purpose software libraries, but integrating these with traditional model components remains a challenge. Next-generation weather and climate models must accommodate a range of tools, including numerical methods, performance-portable frameworks, automatic differentiation toolkits, and ML libraries. However, the architectural complexities of integrating these diverse tools are often overlooked in scientific software development. The minisymposium will focus on architectural design for scalable weather and climate models, addressing key topics like automatic differentiation, optimization, integration of various model components, and efficient data handling for ML-enabled simulations.
Organizer(s): Hannes Vogt (ETH Zurich / CSCS), Enrique González Paredes (ETH Zurich / CSCS), and Mauro Bianco (ETH Zurich / CSCS)
Domain: Climate, Weather and Earth Sciences
Breaking the HPC Silos for Outstanding Results
The minisymposium will be in two parts: first, there will be three scientific presentations from HPC experts from different backgrounds. These talks will offer a view on a variety of career paths, focusing on how inclusivity and diversity played a role in achieving the presented results. The first talk will take us on a tour of tools and best practices to enhance portability and performance of HPC codes. Then we will hear about how digital twins can be positioned as bridges between centers offering top-end capacities and clouds in order to facilitate merging AI and HPC, mainly for environmental science projects. During the third talk, the recipient of the 2023 PRACE Ada Lovelace Award for HPC will share her experience in pushing the performance of applications to reach new horizons. Finally, we will hear from IDEAS4HPC about the activity plan for 2024-2026: mentorship and support for students and young scientists from underrepresented groups to join HPC programs in Switzerland. A round table with the speakers and the public will conclude the minisymposium.
Organizer(s): Marie-Christine Sawley (IDEAS4HPC, ICES Foundation), Maria Girone (CERN), Florina Ciorba (University of Basel), and Maria Grazia Giuffreda (ETH Zurich / CSCS)
Domain: Applied Social Sciences and Humanities
Bridging the Gap: Addressing Software Engineering Challenges for High Resolution Weather and Climate Simulations
HPC is driving scientific progress in fields as diverse as weather forecasting, life sciences and physics simulations. However, adapting to the evolving hardware diversity is a major challenge for the climate, weather, and geoscience community, as their codes are often very large and have evolved over decades. This minisymposium focuses on the challenges faced by weather and climate model developers: 1. Optimising performance and parallel programming on diverse supercomputers, despite programming standards that often require manual coding for system transitions. 2. Exploring new tools and languages for better productivity and performance on different hardware, balancing optimisations without compromising performance. 3. Improving code modularity and software practices for large scientific code bases, adapting to evolving hardware and scientific needs. 4. Bridging the gap between domain scientists and research software engineers, balancing coding expertise with performance optimisation. The minisymposium is twinned with the session “High resolution simulations on large HPC systems”, which takes a closer look at recent results gained on (pre-)exascale systems for different weather and climate models.
Organizer(s): Xavier Lapillonne (MeteoSwiss), and Hendryk Bockelmann (DKRZ)
Domain: Climate, Weather and Earth Sciences
Complex Autonomous Workflows that Interconnect AI/ML, Modeling and Simulation, and Experimental Instruments
Recent advancements in edge computing, automation, and artificial intelligence (AI) have spurred the development of “smart” autonomous laboratories and facilities that integrate complex workflows across experimental instruments and distributed heterogeneous computational resources. The primary objective of our minisymposium is to showcase recent advances aimed at establishing research ecosystem capabilities that seamlessly integrate real-time experiment and computational workflows across edge devices, cloud computing, and high-performance computing (HPC) resources. While edge computing enables laboratories to rapidly process data at its source to reduce latency and improve real-time decision-making, AI/ML and HPC simulations are fundamental components that allow autonomous laboratories to rapidly learn, adapt, and optimize workflow processes. Our minisymposium presentations offer a distinct perspective that highlights common smart autonomous science workflows transcending disciplinary boundaries, fostering cross-disciplinary collaboration and knowledge exchange. The full integration of AI/ML, modeling and simulation, and experimental instruments is an impossible task for a single group. Hence, our minisymposium is intended both to inform attendees of recent complex workflow efforts and to build new collaborations to tackle future challenges in building an interoperable ecosystem for complex workflows that enable autonomous laboratories and facilities.
Organizer(s): Benjamin Mintz (Oak Ridge National Laboratory), Rafael Ferreira da Silva (Oak Ridge National Laboratory), and Rob Moore (Oak Ridge National Laboratory)
Domain: Computational Methods and Applied Mathematics
Composable Julia Software in Atomistic Materials Modeling
Large-scale first-principles simulations play an ever-increasing role in the development of materials and occupy a noteworthy share of available supercomputing resources. In recent years, workflows in the field have become increasingly heterogeneous and couple a range of physical models to balance trade-offs between accuracy and computational cost. Similarly, data-driven approaches are well-established to replace the expensive parts of the modeling procedure, a prime example being machine-learned interatomic potentials for molecular dynamics. This necessitates collaboration between experts in modeling the various physical scales as well as in related disciplines in mathematics and computer science. In this minisymposium, we focus on the opportunities of the Julia programming language to foster such interdisciplinary collaborations in the context of atomistic simulations. Our speakers report recent successes where Julia’s ability to seamlessly compose packages from different communities has helped to overcome disciplinary barriers and to bring novel ideas from differentiable programming, uncertainty quantification, or mathematical analysis into the context of atomistic modeling. Their examples illustrate how Julia enables synergies by supporting — in a single software stack — research thrusts all the way from mathematical analysis to full-scale active learning loops for state-of-the-art materials simulations.
Organizer(s): Michael F. Herbst (EPFL), and Rachel Kurchin (Carnegie Mellon University)
Domain: Chemistry and Materials
Confidential Computing in HPC
HPC continues to be commoditised and democratised, with HPC-as-a-service, workflow-based HPC, and a growing number of HPC and AI use cases, whilst the largest-scale computing resources are simultaneously concentrated into fewer sites capable of meeting the vast power, infrastructure, and financial requirements of exascale systems. As such, HPC service operators are required to cater to an ever-widening variety of users with diverse workloads and potentially sensitive data, and so the ability to protect and isolate confidential workloads in multi-tenant HPC environments is becoming increasingly important. There exist many different HPC or near-HPC workloads which currently cannot run on public or federated cloud environments for various reasons. While the concerns often originate from legal aspects, such as regulatory requirements, protection of intellectual property (algorithms or data) or protection of personal data, the solution must be provided at the platform level. In this minisymposium, we intend to capture the current state of Confidential Computing in HPC, ranging from direct application and workflows to deployment and low-level implementation. We illustrate the impact of mitigation techniques on HPC architectures. We also explore possible advancements and alternatives to silicon-based trust regions, by looking at more fundamental mathematical topics, such as Homomorphic Encryption.
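As a purely pedagogical aside (not drawn from any of the talks), the Python toy below illustrates the homomorphic property hinted at in the closing sentence: textbook, unpadded RSA happens to be multiplicatively homomorphic, so a product can be computed on ciphertexts alone. It is not a secure or practical scheme, and real homomorphic encryption for HPC workloads is far more involved.

```python
# Toy illustration of a homomorphic property: textbook (unpadded) RSA is
# multiplicatively homomorphic, i.e. Enc(a) * Enc(b) mod n decrypts to a * b mod n.
# This is NOT secure or practical homomorphic encryption; it only shows the idea
# of computing on encrypted data without decrypting it first.

p, q = 61, 53                  # tiny primes, for illustration only
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 12
c_prod = (encrypt(a) * encrypt(b)) % n     # multiply ciphertexts only
assert decrypt(c_prod) == (a * b) % n      # recovers the product of the plaintexts
print("Enc(7) * Enc(12) decrypts to", decrypt(c_prod))
```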
Organizer(s): Tiziano Müller (HPE), Tim Dykes (HPE), and Alfio Lazzaro (HPE)
Domain: Computational Methods and Applied Mathematics
Data-Driven Modeling of Aerosols and Clouds for Climate Predictions
Aerosols and clouds have large impacts on climate, with macroscale climate impacts depending on the size and composition of individual aerosol particles and cloud droplets. Large uncertainties exist in simulating aerosols and clouds and in assessing their climate impacts on the global scale. The objective of this minisymposium is to bring together researchers focusing on data-driven methods for the development of cloud and aerosol model parameterizations. These methods have the potential to reduce structural and parametric uncertainty and to improve the consistency of the representation of aerosol and cloud processes across spatial and temporal scales. Our speakers will present on machine learning for Earth system prediction and predictability, reduced mechanisms of atmospheric chemistry, Bayesian methods to estimate aerosol process rates, and the learning of highly efficient reduced-order models of cloud microphysics. While our minisymposium will focus on the area of climate, the methods have the potential to be applicable in other areas that have similar multi-scale structures.
Organizer(s): Nicole Riemer (University of Illinois Urbana-Champaign), Lekha Patel (Sandia National Laboratories), and Matthew West (University of Illinois Urbana-Champaign)
Domain: Climate, Weather and Earth Sciences
Energy Efficiency and Carbon Footprint of Earth System Modeling in Times of Spiraling Electricity Prices and Net-Zero Goals
Rising energy costs and the endeavor to reach net-zero have recently brought energy-efficient high-performance computing back to the forefront. In no application area is this topic more pressing than in Earth system modeling (ESM), which comes at a huge computational expense and where there is a strong moral imperative to minimize its negative impact. Atmospheric models are progressing towards cloud-resolving scales that require exascale computing, while electricity becomes scarcer. Computing centers are being challenged by rising energy budgets while transitioning to dramatically more powerful high-performance computing platforms. Are the ESM and HPC communities doing enough to address these challenges? We have invited four speakers to talk about a wide range of energy and climate footprint perspectives, ranging from the tools needed for energy consumption analysis, to the concrete carbon footprint analysis of high-resolution global simulations on a state-of-the-art computing platform, and finally to the societal obligation we face in our community to lead as role models. While we can highlight our best efforts towards energy efficiency and carbon prudence, we will leave it to the audience to decide if we are living up to our obligation to the planet while still promoting this important research area.
Organizer(s): William Sawyer (ETH Zurich / CSCS), and Jan Frederik Engels (German Climate Computing Centre)
Domain: Climate, Weather and Earth Sciences
Ethical and Societal Considerations for Scientific Computing
While significant investments have been made in the exploration of ethics in computation, recent advances in high performance computing (HPC) and artificial intelligence (AI) have reignited the discussion of more responsible and ethical computing with respect to the design and development of pervasive sociotechnical systems within the context of existing and evolving societal norms and cultures. The ubiquity of HPC in everyday life presents complex sociotechnical challenges for all who seek to practice responsible computing and ethical technological innovation. We wish to discuss the ways we can incorporate ethics into all phases of scientific computing development and deployment to ensure that the desired scientific outcome is achieved fully in a context that ethically considers humans and society rather than just the technical requirements. We will share experiences from those who have incorporated ethics into what they do to demonstrate that ethics and technical achievement are not at odds. We will also include perspectives on ethics to promote a lively discussion that seeks balance in how we pursue scientific progress. The panel discussion in this session will address lessons learned and facilitate audience interaction aimed at enabling informed decision-making regarding ethics and responsible computing.
Organizer(s): Jay Lofstead (Sandia National Laboratories)
Domain: Applied Social Sciences and Humanities
European Perspective on Converged HPC and Cloud Hardware & Software Architectures
Today, cloud computing technologies have gained prevalence for their benefits in resource dynamism, automation, reproducibility, and resilience. HPC offers access to advanced computing techniques and massive processing capability for grand challenges and scientific discovery. Meanwhile, the computing landscape is changing rapidly towards complex workloads and workflows that combine simulations with data analytics and machine learning. These workloads aim to apply large-scale and distributed computing to domains with high societal impact, such as autonomous vehicles and smart cities. Under the Horizon Europe 2022 call “Open source for cloud-based services”, several projects have gathered to tackle different aspects of this challenge of integrating Cloud and HPC. In this minisymposium, we discuss first results as well as the possible impact of some of those projects and their proposed architectures on the HPC landscape.
Organizer(s): Tiziano Müller (HPE), and Alfio Lazzaro (HPE)
Domain: Computational Methods and Applied Mathematics
Exploring the Structure-Property Relation in Soft Matter with Computational Tools: Hierarchical Structures and Multiscale Dynamics
Understanding the intricate interplay between a material’s structure and its macroscopic properties is essential in materials science, particularly for soft matter systems like polymers, composites, and colloidal systems. The complexity arises from the relevant length and timescales spanning nanometers to meters and picoseconds to years. Properties such as mechanical strength, thermal conductivity, and responsiveness to external stimuli are closely linked to molecular and macromolecular structures, introducing challenges in predicting overall behavior. Scattering experiments from large-scale neutron or X-ray facilities, coupled with benchtop techniques like microscopy, rheology, and spectroscopy, offer insights into materials’ structures. Yet, data analysis and interpretation of experiments and simulation results often require computational assistance. In this minisymposium, diverse researchers showcase computational tools, such as molecular dynamics simulations, numerical computations based on physical theories, and deep/machine learning techniques for investigating soft matter. These tools bridge experimental observations and theoretical predictions, facilitating the exploration of both structure and dynamics in soft matter systems. Serving as a bridge between experimental and theoretical realms, these computational tools contribute to a multidisciplinary effort, enhancing our understanding of fundamental material aspects and opening avenues for innovative applications across diverse industries.
Organizer(s): Jan Michael Carrillo (Oak Ridge National Laboratory), Yangyang Wang (Oak Ridge National Laboratory), Wei-Ren Chen (Oak Ridge National Laboratory), and Jihong Ma (The University of Vermont)
Domain: Chemistry and Materials
Foundation Models in Earth System Science
Foundation models represent a new class of artificial intelligence models designed to encapsulate information from large amounts of data. The most prominent examples are large language models, which have achieved significant breakthroughs in various applications, including natural language processing (NLP), text generation, and machine translation. Some prominent examples of foundation models include GPT models, BERT, PalmX, and their derivatives. This minisymposium will explore the untapped potential of foundation models in Earth system science, bringing together three different perspectives. First, we will introduce AtmoRep, a foundation model for the atmosphere released by a multidisciplinary collaboration between Magdeburg University, CERN, and JSC. After that, we will move the discussion to the industry perspective, with Dr. Johannes Jakubik from IBM. In particular, Dr. Jakubik will present the Prithvi model for Earth Observation data, developed in collaboration with NASA. Then, Troy Arcomano from Argonne National Laboratory will present two different models: ClimaX, released in collaboration with Microsoft, and Stormer. The last part of the session will be an interactive discussion with the audience on the role of foundation models in fundamental science. It will be the occasion for the community to brainstorm around the future and the potential of these new technologies.
Organizer(s): Ilaria Luise (CERN), and Christian Lessig (ECMWF, Otto-von-Guericke-Universität Magdeburg)
Domain: Climate, Weather and Earth Sciences
GPU Acceleration in Earth System Modeling: Strategies, Automated Refactoring Tools, Benefits and Challenges
Earth System models simulate the complex interactions of the atmosphere, oceans, land and sea ice, providing valuable insights into short-term weather forecasts and long-term climate research, which are important for understanding and mitigating the impacts of weather-related disasters and climate change. As the complexity and computational demands of these models increase, the need for Graphics Processing Unit (GPU) acceleration becomes increasingly apparent. GPUs are computational architectures that efficiently support massive parallelism. Whilst several studies have shown promising computational performance by porting Earth System models to GPUs and thus enabling higher-resolution simulations, some of them have also discussed the challenges of adapting existing codes to run on GPUs. To address refactoring and portability issues, automated code refactoring tools have been developed to increase the efficiency of porting code to GPUs and improve portability and maintainability. This minisymposium aims to bring together scientists, computational researchers, and model developers to explore the role of GPU acceleration in optimizing Earth System models, share experiences, and look to the future. Topics also include optimization strategies (e.g., parallelization techniques, memory management, data transfer, etc.), automated code refactoring tools (e.g., PSyclone), and benefits and challenges (e.g., speedup, memory constraints, code management, etc.).
Organizer(s): Wei Zhang (Oak Ridge National Laboratory), Christopher Maynard (Met Office), and Iva Kavcic (Met Office)
Domain: Climate, Weather and Earth Sciences
High Performance Computing for Magnetic Fusion Applications
This series of three minisymposia will be dedicated to addressing frontier challenges in magnetic fusion research. (1) Machine Learning and Quantum Computing: the four speakers will cover various aspects of machine learning, from real-time control of tokamaks to turbulence simulations to HPC issues. One talk will be devoted to the topic of quantum computing and examine opportunities for application in the field of fusion plasma physics. (2) New developments for Edge and Scrape-Off Layer (SOL) simulations: this is recognized as a frontier domain, involving significant challenges at various levels. Three talks will be devoted to progress made on three different kinetic codes, while a generalization of gyrokinetic models to magnetized sheath conditions will be presented in a fourth talk. (3) Beyond gyrokinetic models: standard gyrokinetic theories have their limitations, which prevent them from being applied as is to various situations, in particular in the presence of steep gradients as found in the outer plasma region. Advanced kinetic simulations beyond the standard gyrokinetic approach used in magnetic fusion will be presented. The relation between (fully-)kinetic, gyrokinetic, drift-kinetic models and the MHD limit of these will be discussed. In all three sessions, the latest HPC applications in the field will be emphasized.
Organizer(s): Eric Sonnendrücker (Max Planck Institute for Plasma Physics, Technical University of Munich), Laurent Villard (EPFL), and Stephan Brunner (EPFL)
Domain: Physics
High Performance Computing meets Quantum Computing
Quantum Computing (QC) exploits quantum physical phenomena, like superposition and entanglement. A mature quantum computer has the potential to solve some exceedingly difficult problems with moderate input sizes efficiently. Still, much work lies ahead before quantum computing can compete with current HPC technologies, or even successfully integrate with and complement them. From a purely software point of view, several promising algorithms for quantum systems have been developed over the past decades. These algorithms have been limited to a specific set of problem types and require the users to transform their problem into a format that can be solved using these quantum algorithms. In general, a paradigm is emerging in which quantum computers will not replace traditional supercomputers. Instead, they will become an integral part of supercomputing solutions, acting as an “accelerator”, i.e. specialised to speed up some parts of the application execution. In this respect, this hybrid HPC-QC approach is where real-world applications will find their quantum advantage. The goal of the minisymposium is to gather researchers and developers to discuss their experiences with application development with QC algorithms, specifically related to the integration of applications currently running on “classical” HPC systems that aim to use QC devices as accelerators.
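As an illustrative aside, the short NumPy sketch below simulates the two phenomena named above, superposition and entanglement, by preparing a Bell state on two qubits. It is a classical toy simulation only and does not represent any HPC-QC integration layer discussed in the session.

```python
# Classical statevector toy: a Hadamard gate creates superposition on one qubit,
# a CNOT then entangles it with a second, giving the Bell state (|00> + |11>)/sqrt(2).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control on the first qubit

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I) @ state                  # superposition on qubit 0
state = CNOT @ state                           # entanglement

for bits, amp in zip(["00", "01", "10", "11"], state):
    print(f"P(|{bits}>) = {abs(amp)**2:.2f}")  # 0.50, 0.00, 0.00, 0.50
```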
Organizer(s): Alfio Lazzaro (HPE), and Tiziano Mueller (HPE)
Domain: Computational Methods and Applied Mathematics
High Performance Graph Analytics
Estimating the structure of graphs, partitioning them, and analyzing them are critical tasks in a plethora of applications. Problems in domains such as image processing, social network analysis, and classification via neural networks are often formulated as being graph-based. At the same time, graph analytics is traditionally an important subtask that enables the parallelization or the complexity reduction of an entire algorithmic workflow. This minisymposium samples recent advances in methods intended for graphs emerging from large-scale data, with a focus on performant, efficient, and scalable algorithms.
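For readers unfamiliar with the kind of kernel involved, the hedged sketch below shows one classic graph-analytics task, spectral bisection via the Fiedler vector of the graph Laplacian, on a toy six-node graph; production methods of the kind discussed in the session would instead rely on sparse, iterative, and parallel algorithms.

```python
# Spectral bisection of a toy graph: two triangles joined by a single bridge edge.
# The Fiedler vector (eigenvector of the second-smallest Laplacian eigenvalue)
# separates the two triangles; dense eigensolvers are used here purely for clarity.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0),     # triangle A
         (3, 4), (4, 5), (5, 3),     # triangle B
         (2, 3)]                     # bridge between them
n = 6

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian L = D - A

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]              # eigenvector of the second-smallest eigenvalue
part = fiedler > np.median(fiedler)  # median split gives the bisection
print("partition:", part.astype(int))   # the two triangles end up in different parts
```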
Organizer(s): Dimosthenis Pasadakis (Università della Svizzera italiana), and Olaf Schenk (Università della Svizzera italiana)
Domain: Computational Methods and Applied Mathematics
HPC Code Development for Multi-Scale Multiphysics Simulations for Fusion Energy Design
Fusion energy is a grand challenge that can contribute to reducing greenhouse gas emissions and their negative effects on the climate. The successful deployment of fusion energy devices will depend on the robust engineering design of every component. To fully understand the complexities of fusion energy systems, a multidisciplinary approach is needed. Regardless of the form it takes, a fusion energy device encompasses a wide array of length and time scales that will require novel computational techniques to bridge the gaps existing between the multiple scales and multiple physics. One avenue for addressing the fusion energy challenge is with high-fidelity simulations that are used to inform machine learning and AI algorithms to produce reduced-order models. These computational models accurately capture the physics while providing fast-running macro-scale, or engineering-scale, simulations for rigorous design optimizations. In this minisymposium, experts in modeling and simulation for fusion energy applications are brought together to lay out what is needed to build these simulations. The focus will be on bridging the gap that exists between high-fidelity modeling and simulation, and engineering models.
Organizer(s): Franklin G. Curtis (Oak Ridge National Laboratory), and Stuart Slattery (Oak Ridge National Laboratory)
Domain: Physics
Improving the Sustainability of Research and Scientific Software
In the rapidly evolving field of computational science, the sustainability of research and scientific software has become a crucial element for future progress. The robustness and long-term viability of scientific computing software ecosystems are essential for the success of many scientific and technical organizations. A critical aspect of promoting sustainability is the evaluation and monitoring of project health within these ecosystems. This minisymposium will explore the requirements for, and development of, sustainability metrics, with speakers from the open-source software community, industry, and national laboratories sharing insights on theoretical models and practical experiences in software management. This event will also include a panel discussion, serving as a collaborative platform for exchanging best practices in software sustainability. It will address the challenges in software maintenance, the importance of effective stewardship strategies, and the crucial role of community involvement in ensuring the enduring success of scientific software. This event is more than a discussion: it is an active engagement in shaping the future of scientific software sustainability. It offers a platform for collaboration among various stakeholders to ensure the resilience and vitality of scientific software, crucial for progress in computational science and technology.
Organizer(s): Gregory R. Watson (Oak Ridge National Laboratory), Elaine M. Raybourn (Sandia National Laboratories), Addi Malviya-Thakur (Oak Ridge National Laboratory), and Daniel S. Katz (University of Illinois Urbana-Champaign)
Domain: Computational Methods and Applied Mathematics
In Situ Coupling of Simulations and AI/ML for HPC: Software, Methodologies, and Applications
Motivated by the remarkable success of artificial intelligence (AI) and machine learning (ML) in the fields of computer vision and natural language processing, the last decade has seen a host of successful applications of AI/ML to a variety of scientific domains. In most cases, the models are trained using the traditional offline (or post hoc) approach, wherein the training data is produced, assembled, and curated separately before training is deployed. While more straightforward, the offline training workflow can impose some important restrictions on the adoption of ML models for scientific applications. To overcome these limitations, in situ (or online) ML approaches, wherein ML tasks are performed concurrently with the ongoing simulation, have recently emerged as an attractive new paradigm. In this minisymposium, we explore novel approaches to enable the coupling of state-of-the-art simulation codes with different AI/ML techniques. We discuss the open-source software libraries that are being developed to solve the software engineering challenges of in situ ML workflows, as well as the methodologies adopted to scale on modern HPC systems and their applications to solve complex problems in different computational science domains.
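A minimal sketch of the online idea follows, under the assumption of a single shared in-memory queue standing in for the purpose-built coupling libraries the session will discuss: the "simulation" streams samples while a concurrent trainer consumes them, with no intermediate dataset written to disk.

```python
# The "simulation" thread streams (state, target) samples into a shared queue
# while a trainer consumes them concurrently, so no dataset is staged on disk
# between the two. The linear surrogate and the synthetic target are stand-ins.
import queue
import threading
import numpy as np

buf = queue.Queue(maxsize=1000)
w = np.zeros(3)                                # weights of a tiny linear surrogate

def simulation(n_steps=2000):
    rng = np.random.default_rng(0)
    for _ in range(n_steps):
        x = rng.normal(size=3)                 # stand-in for a simulation state
        y = 2.0 * x[0] - x[2]                  # stand-in for an expensive quantity
        buf.put((x, y))                        # hand the fresh sample to the trainer
    buf.put(None)                              # signal the end of the run

def trainer(lr=0.05):
    while (item := buf.get()) is not None:     # train concurrently with the run
        x, y = item
        w[:] += lr * (y - w @ x) * x           # one SGD step on the streamed sample

t = threading.Thread(target=trainer)
t.start()
simulation()
t.join()
print("learned weights:", np.round(w, 2))      # approximately [2.0, 0.0, -1.0]
```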
Organizer(s): Riccardo Balin (Argonne National Laboratory), Ramesh Balakrishnan (Argonne National Laboratory), Andrew Shao (HPE), and Alessandro Rigazzi (HPE)
Domain: Engineering
Innovations Unleashed: The Future of Scientific Research with Cloud Labs and Self-Driving Labs
The integration of artificial intelligence (AI), machine learning (ML), and automated instrumentation is transforming research through Cloud Labs and Self-Driving Labs (SDLs). Positioned strategically, these labs enhance accessibility and integrate computing and data analysis, enabling researchers to leverage cutting-edge technology globally. A national network of Cloud Labs and SDLs is envisioned to promote synergy in AI, ML, computing, and data analysis. The minisymposium prioritizes efficiency, reproducibility, inclusivity, collaboration, and cost-effectiveness, fostering a robust ecosystem. Addressing challenges, it targets reproducibility in Cloud Labs and SDLs, emphasizing parallels between computing and experimental workflows. Pursuing replicable results in virtual environments and automated SDL procedures builds trust in scientific validity. Standardized practices, rigorous documentation, and transparent methodologies, whether in digital Cloud Labs or physical SDLs, are crucial for advancing scientific inquiry and bolstering trust in research outcomes. The minisymposium aims to unite experts, researchers, and industry professionals to showcase innovations, discuss challenges, explore funding opportunities, and share success stories in integrating computing and data analysis within research paradigms.
Organizer(s): Michela Taufer (University of Tennessee), and Ewa Deelman (University of Southern California, ISI)
Domain: Chemistry and Materials
Interfacing Machine Learning with Physics-Based Models
Many fields of science make use of large numerical models. Advances in artificial intelligence (AI) and machine learning (ML) have opened many new approaches, with modellers increasingly seeking to enhance simulations by combining traditional approaches with ML/AI to form hybrid models. Examples of such approaches include ML emulation of computationally intensive processes and data-driven parameterisations of sub-grid processes. Successfully blending these approaches presents several challenges requiring expertise from multiple areas: AI, domain science, and numerical modelling through to research software and high performance computing. Whilst hybrid modelling has recently become an extremely active area in Earth sciences, the approach and challenges are in no way specific to this domain. Progress is also underway in materials, fluid mechanics and engineering, plasma physics, and chemistry amongst other fields. This interdisciplinary session on hybrid modelling aims to allow scientific modellers to share techniques and breakthroughs in a cross-domain forum. We will hear from both academia and industry about the tools being developed and techniques being used to push forward on a range of fronts across multiple fields. This will be followed by a discussion session in which attendees are invited to share their own challenges and successes with others.
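The following toy sketch, with an invented sub-grid term and a simple polynomial emulator (neither taken from the talks), illustrates the hybrid pattern in its simplest form: a data-driven surrogate stands in for an expensive process inside an otherwise conventional timestepping loop.

```python
# A cheap polynomial emulator stands in for an "expensive" sub-grid term inside
# an otherwise conventional timestepping loop. Both the toy dynamics and the
# emulator are invented for illustration.
import numpy as np

def expensive_subgrid(u):
    return -0.1 * u * np.abs(u)                    # stand-in for a costly parameterisation

# Fit a tiny emulator (kept offline here only for brevity).
u_train = np.linspace(-2.0, 2.0, 200)
emulator = np.poly1d(np.polyfit(u_train, expensive_subgrid(u_train), deg=5))

def step(u, dt=0.01, use_emulator=True):
    tendency = emulator(u) if use_emulator else expensive_subgrid(u)
    return u + dt * (np.roll(u, 1) - u + tendency)  # toy advection plus the sub-grid term

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
for _ in range(500):
    u = step(u)                                     # hybrid physics + ML-style update
print("mean state after 500 hybrid steps:", round(float(u.mean()), 4))
```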
Organizer(s): Jack W. Atkinson (University of Cambridge)
Domain: Climate, Weather and Earth Sciences
Julia for HPC: Tools and Applications
Performance portability and scalability on large-scale heterogeneous hardware are crucial challenges for current scientific software development. Beyond software engineering considerations, workflows making further use of large datasets to constrain physical models are also emerging and are indispensable to develop, e.g., digital twins. GPU computing and differentiable programming constitute leading-edge tools that provide a promising way to combine physics-based simulations with novel machine learning and AI-based methods to address interdisciplinary problems in science. The Julia language leverages both tools, as it includes first-class support for various accelerator types and an advanced compiler interface that supports native automatic differentiation capabilities. Julia makes it possible to differentiate efficiently through both CPU and GPU code without significant impact on performance. The goal of this minisymposium is to bring together scientists who work on or show interest in large-scale Julia HPC development, with a particular focus on the necessary tool stack for automatic differentiation and machine learning in the Julia GPU ecosystem, and on applications built on top of it. The selection of speakers, with expertise spanning from computer to domain science, offers a unique opportunity to learn about the latest developments of Julia for HPC to drive discoveries in the natural sciences.
Organizer(s): Ludovic Räss (University of Lausanne, ETH Zurich), Samuel Omlin (ETH Zurich / CSCS), and Michael Schlottke-Lakemper (RWTH Aachen University)
Domain: Climate, Weather and Earth Sciences
Machine Learning Support for the Lifetime of Software (ML4SW)
Scientific simulations running on High Performance Computing (HPC) systems play a critical role in advancing science and engineering. The HPC community stands to gain significantly by applying cutting-edge AI technologies, such as Large Language Models (LLMs), Deep Neural Networks (DNNs), or Transformers, in various aspects of scientific software development and execution. The Machine Learning Support for the Lifetime of Software (ML4SW) minisymposium aims to establish a platform where scientists, developers, and system programmers can come together to exchange ideas and explore how artificial intelligence can help in the effective use of future systems, as well as how Scientific Machine Learning can be scaled on HPC systems.
Organizer(s): Florina Ciorba (University of Basel), Harshitha Menon (Lawrence Livermore National Laboratory), and Konstantinos Parasyris (Lawrence Livermore National Laboratory)
Domain: Computational Methods and Applied Mathematics
Modern Fortran: Powering Computational Science and Engineering in High-Performance Computing
Fortran, originating in the 1950s, is a compiled imperative programming language renowned for numeric and scientific computation. Despite seven decades of technological evolution and competition from languages like C++, Python, and Julia, Fortran remains predominant in computational sciences. Its efficiency-focused design and readability cater to scientists and engineers. Fortran’s robust support for mathematical libraries and tools contributes to its enduring popularity, extending its relevance to contemporary challenges in data analysis, AI, and machine learning. The language excels in parallelism and distributed computing, seamlessly integrating with modern supercomputers and clusters. Fortran’s compatibility with GPUs positions it for GPU-accelerated computing, vital in scientific workloads. As of August 2021, it ranks among the top fifteen programming languages. Modern Fortran, encompassing standards from 2003 to 2023, introduces features like object-oriented and parallel programming, enhancing interoperability with C. This minisymposium aims to explore the evolution and relevance of Modern Fortran, with examples showcasing its integration with AI techniques to accelerate time-to-science.
Organizer(s): Filippo Spiga (NVIDIA Inc.), Salvatore Filippone (University of Rome Tor Vergata), and Damian Rouson (Lawrence Berkeley National Laboratory)
Domain: Computational Methods and Applied Mathematics
Modern PDE Discretization Methods and Solvers in a Non-Smooth World
This minisymposium will explore the tension between high-order discretisation methods for PDEs and the fact that many physical phenomena are non-smooth. We will also investigate connections to machine learning, such as the use of reduced-precision arithmetic in both domains. High-order discretisations in space and time can make optimal use of FLOP-bound exascale hardware and have the potential to unlock additional parallelism. However, it is an open question how these methods can be applied to time-dependent PDEs with elliptic constraints. Off-the-shelf preconditioners are not sufficient, and multigrid methods are being developed to solve the resulting large sparse linear systems of equations. Implementing advanced, reliable, and performance-portable PDE-based simulation tools requires the combined expertise of specialists from different domains. Real-life codes are starting to use novel discretisation techniques: the UK Met Office explores the solution of the equations of atmospheric fluid dynamics with hybridised finite elements, non-nested multigrid preconditioners, and parallel-in-time methods. The ADER-DG ExaHyPE engine is being extended to include elliptic constraints and support implicit timestepping for astrophysics simulations. A discussion session will explore how the advantages of sophisticated PDE solvers and machine learning can be combined productively.
Organizer(s): Eike Mueller (University of Bath), and Tobias Weinzierl (Durham University)
Domain: Computational Methods and Applied Mathematics
Motif-Based Automated Performance Engineering for HPC
We will describe here domain-specific libraries (DSLs) that express mathematical/programming motifs (data objects and operations on those data objects), along with software back-ends that translate the library calls into high-performance code. By the use of a motif-aware software stack, the scientific application code written is much smaller than fully optimized code, with the application-level code remaining unchanged when moving between platforms, thus leading to a less expensive development process. The four talks being given cover multiple motifs and different approaches to supporting motif-based DSLs. (1) George Bisbas (ICL) will talk about an approach to structured-grid DSLs based on lowering the abstractions written in Python to the LLVM Multi-Level Intermediate Representation (MLIR). (2) Het Mankad (CMU / ORNL) will talk about Proto / ProtoX, a DSL for the structured-grid motif that targets CPUs and GPUs, based on the Spiral toolchain. (3) Sanil Rao (CMU) will talk about FFTX for supporting FFTs on CPU and GPU systems, based on the Spiral toolchain. (4) Sam Reeve (ORNL) will talk about Cabana, a DSL for supporting grid-free particle methods and hybrid particle/mesh methods on GPUs, based on the use of the Kokkos run-time libraries for GPU parallelism.
Organizer(s): Phillip Colella (Lawrence Berkeley National Laboratory; University of California, Berkeley), Franz Franchetti (Carnegie Mellon University), and Brian Van Straalen (Lawrence Berkeley National Laboratory)
Domain: Computational Methods and Applied Mathematics
Next-Generation Computing for the Large Hadron Collider
The Large Hadron Collider (LHC) is the “energy frontier” of high energy physics, but the accelerator itself has advanced fundamental physics only with extensive computing resources and R&D. Data collection from the detectors around the LHC’s collision sites requires advanced GPU- and FPGA-based triggers to filter the incoming data. Simulating the creation of known particles from hypothetical physics requires GPU-accelerated quantum chromodynamics (QCD). Modeling those particles’ interaction with the detectors to create simulated responses requires massive compute resources that are being gradually transitioned to GPUs. Finally, confirming new physics requires complex analysis frameworks that reconstruct experimental particle tracks and compare them with the simulated results to deduce the fundamental physics creating those particles. Our minisymposium will dive into the advanced computing necessary to enable these new discoveries: a big-picture overview of the LHC, the detector experiments, and their cumulative computing requirements; high-level descriptions of the R&D in GPU accelerators and AI/ML; “online” computing for filtering and capturing data from the experiments; “offline” computing for event generation and detector simulation; and “reconstruction” that determines new physics by comparing the experimental and computational results.
Organizer(s): Seth Johnson (Oak Ridge National Laboratory), Philippe Canal (Fermilab), and Katherine E. Roystone (Oak Ridge National Laboratory)
Domain: Physics
Nexus of AI and HPC for Weather, Climate, and Earth System Modelling
Accurately and reliably predicting weather and climate change and associated extreme weather events is critical to plan for disastrous impacts well in advance and to adapt to sea level rise, ecosystem shifts, and food and water security needs. The ever-growing demands of high-resolution weather and climate modeling require exascale systems. Simultaneously, petabytes of weather and climate data are produced from models and observations each year. Artificial Intelligence (AI) offers novel ways to learn predictive models from complex datasets, at scale, that can benefit every step of the workflow in weather and climate modeling: from data assimilation to process emulation to solver acceleration to ensemble prediction. Further, how do we make the best use of AI to build or improve Earth digital twins for a wide range of applications from extreme weather to renewable energy, including at highly localized scales such as cities? The next generation of breakthroughs will require a true nexus of HPC and large-scale AI, bringing many challenges and opportunities. This minisymposium will delve into the challenges and opportunities at the nexus of HPC and AI. Presenters will describe scientific and computing challenges and the development of efficient and scalable AI solutions for weather and climate modeling.
Organizer(s): Karthik Kashinath (NVIDIA Inc.), and Peter Dueben (ECMWF)
Domain: Climate, Weather and Earth Sciences
Novel Algorithms and HPC Implementations for Exascale Particle-In-Cell Methods
The primary aim of this minisymposium is to assemble a focused discussion on the challenges, emerging best practices, and innovative algorithms associated with Particle-in-Cell (PIC) methods as they relate to exascale architectures. The minisymposium specifically addresses demanding scientific problems that can be resolved through the application of cutting-edge exascale techniques, primarily rooted in the PIC paradigm. The spotlight of this minisymposium is predominantly on the complexities encountered at the exascale level, deviating from the routine narratives of success stories that have been repeatedly recounted. In this setting, we aim to foster a conducive environment for active dialogue, encouraging participants to delve deeper into unresolved issues and explore the potential of PIC at the exascale level.
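For orientation, the sketch below implements the textbook PIC cycle, deposit, field solve, gather, push, for a 1D electrostatic plasma in normalised units; it is purely pedagogical and far removed from the exascale implementations the session targets.

```python
# Minimal 1D electrostatic PIC cycle in normalised units on a periodic domain:
# (1) deposit charge to the grid, (2) solve Poisson's equation spectrally,
# (3) gather the field at particle positions, (4) push the particles.
import numpy as np

L, ng, npart, dt, steps = 2 * np.pi, 64, 20000, 0.05, 200
dx = L / ng
rng = np.random.default_rng(1)

x = rng.uniform(0, L, npart)                      # particle positions
v = 0.1 * np.sin(x)                               # small velocity perturbation
q = -L / npart                                    # electron macro-charge (ions form a fixed background)
k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)         # wavenumbers for the field solve

for _ in range(steps):
    cells = (x / dx).astype(int) % ng             # (1) nearest-grid-point deposition
    rho = np.bincount(cells, minlength=ng) * q / dx + 1.0
    rho_k = np.fft.rfft(rho)                      # (2) -k^2 phi_k = -rho_k, then E = -dphi/dx
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.fft.irfft(-1j * k * phi_k, n=ng)
    v += dt * (-1.0) * E[cells]                   # (3)+(4) gather and push, charge/mass = -1
    x = (x + dt * v) % L

print("final field energy:", float(0.5 * dx * np.sum(E ** 2)))
```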
Organizer(s): Sriramkrishnan Muralikrishnan (Forschungszentrum Jülich), Andreas Adelmann (Paul Scherrer Institute), and Matthias Frey (University of St Andrews)
Domain: Computational Methods and Applied Mathematics
Open-Source Scientific Software Ecosystem Stewardship: Pathways to Foundations
Broad communities across government, non-profits, industry, and academia are involved in the use and development of scientific software to readily integrate the latest approaches in computing such as artificial intelligence (AI) and machine learning (ML). The open-source foundation model has proved highly successful in the sustainment of research software. For example, the two-decade-old Linux Foundation supports over 200 communities and 3,400 project source code repositories, including the recently announced High Performance Software Foundation (HPSF). Additionally, NumFOCUS, Inc., established in 2012 and focused on essential research software projects, has 45 sponsored and 51 affiliated projects. In this minisymposium, we explore stewardship strategies and pathways to foundations that foster innovation and provide common services and infrastructure benefiting all communities. Our speakers will address the following questions: Why would one want to join an open-source software foundation? What are potential pathways to joining a foundation? What challenges and opportunities may be encountered? The talks in this session will address lessons learned and facilitate a thought-provoking discussion aimed at enabling informed decision-making when choosing pathways to open-source software advancement.
Organizer(s): Elaine M. Raybourn (Sandia National Laboratories), Addi Thakur Malviya (Oak Ridge National Laboratory), Gregory R. Watson (Oak Ridge National Laboratory), and Daniel S. Katz (University of Illinois Urbana-Champaign)
Domain: Computational Methods and Applied Mathematics
Quantum Simulations of Lattice QCD: Long-Term Goal or Near Future?
The rapid advancement of quantum technologies in recent years holds the potential to revolutionize various areas of physics where classical computations are prohibitively expensive. One prominent example is the study of the strong nuclear interaction, where getting predictions at high baryon densities and making full use of extensive experimental efforts is especially challenging due to the sign problem in the existing Monte Carlo approach. Despite all the theoretical and computational research conducted so far, no systematic solution has been found for this issue using classical computers. This minisymposium aims to delve into the recent progress in lattice gauge theories from two complementary perspectives: classical and quantum simulations. We intend to discuss the current status of large-scale lattice QCD projects running at leading HPC centers, as well as publicly available cloud-based quantum computing facilities. Additionally, we will touch upon the question of scalability in such computations and discuss the latest experimental developments that make these computational advances possible. Our goal is to assess the potential to chart the phase diagram of strongly interacting matter across a wide range of densities, based on recent progress in the quantum industry, experimental research, and theoretical foundations.
Organizer(s): Marina Krstic Marinkovic (ETH Zurich), and Luigi Del Debbio (University of Edinburgh)
Domain: Physics
Reaching More Relevant Time Scales with Molecular Dynamics Simulations
Molecular dynamics (MD) simulations are a tool used frequently in the life sciences to study processes of interest at very high resolution (from atomistic to mesoscopic). Because the experimental characterization of many of the phenomena of interest usually lacks spatial and/or temporal resolution, computer simulations are, in principle, an attractive complement to experiment. One challenge in studying these systems with computational models is that the time scales of the processes of interest are large and heterogeneous, ranging from picoseconds to seconds or longer. This minisymposium will bring together different perspectives on how to address this overarching challenge. Because MD simulations tend to be limited by basic latency issues, rather than by the total amount of computing power available, no universal solutions are currently on the horizon. The session will feature talks from experts on pushing the boundaries by hardware/software co-design, by sacrificing some spatiotemporal resolution (coarse-graining), by directly modifying the computational models to optimize dynamical properties, and by using parallel (swarm-like) sampling methodologies coupled with dedicated data analysis tools.
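A rough, back-of-the-envelope calculation (with an assumed throughput of ten thousand time steps per second of wall-clock time, stated only for illustration) makes the time-scale gap concrete:

```python
# With a typical ~2 fs time step, each simulated time scale below requires the
# listed number of sequential force evaluations; the assumed throughput of
# 10,000 steps per wall-clock second is purely illustrative.
dt_fs = 2.0                                            # typical atomistic MD time step in femtoseconds
targets_fs = {"1 ns": 1e6, "1 us": 1e9, "1 ms": 1e12, "1 s": 1e15}

for label, t_fs in targets_fs.items():
    n_steps = t_fs / dt_fs
    days = n_steps / 1e4 / 86400                       # assumed 1e4 steps/s of wall-clock throughput
    print(f"{label:>5}: {n_steps:.1e} steps, ~{days:.1e} days at 10k steps/s")
```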
Organizer(s): Andreas Vitalis (University of Zurich), Julian Widmer (University of Zurich), and Pablo Vargas Rosales (University of Zurich)
Domain: Life Sciences
Riding the Cambrian Explosion in Hardware for Scientific Computing
Scientific computing has relied upon commodity components for many years, and currently the most popular are x86 CPUs and NVIDIA GPUs. However, there is a Cambrian explosion of new types of hardware for accelerating scientific codes, and as such a wealth of other options are becoming available. These include extremely high core-count processors (e.g. the Cerebras CS-2 and Graphcore IPU), highly vectorised and flexible processing elements (e.g. AMD’s AI engines, Google TPUs), Field Programmable Gate Arrays (FPGAs), and a range of technologies built upon RISC-V (e.g. the 1000-core Esperanto accelerator). Furthermore, many of these new architectures are capable of being highly energy efficient and so potentially provide a route to delivering improved performance at reduced environmental cost. However, a major challenge is how scientific application developers can leverage these technologies, and whether they actually deliver the benefits that the vendors claim. This minisymposium will bring together experts in developing these novel technologies and leveraging them for HPC application acceleration, with the scientific community. We will explore the potential benefits of these new architectures, which ones optimally suit what application properties, and discuss some of the challenges that must be overcome for them to become mainstream in scientific computing.
Organizer(s): Nick Brown (EPCC)
Domain: Computational Methods and Applied Mathematics
Scalable Machine Learning and Generative AI for Materials Design
The design and discovery of materials with desired functional properties is challenging due to labor-intensive experimental measurements and computationally expensive physics-based models, which preclude a thorough exploration of large chemical spaces characterized by vast numbers of chemical compositions and atomic configurations per composition. This disconnect has motivated the development of data-driven surrogate models that can overcome experimental and computational bottlenecks to enable an effective exploration of such vast chemical spaces. In this minisymposium, we discuss new generative artificial intelligence (AI) methods to perform materials design. A particular advantage of generative AI approaches is their ability to learn the context and syntax of molecular data described by fundamental principles of physics and chemistry, providing a critical basis for informing the generative design of molecules. To ensure the generalizability and robustness of the generative model, it needs to be trained on a large volume of data that thoroughly samples diverse chemical regions. Due to the large volumes of data that must be processed, efficiently training these models requires leveraging a massive amount of high performance computing (HPC) resources for scalable training. This minisymposium aims to broadly cover HPC aspects of scalable generative AI models across several heterogeneous distributed computational environments.
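As a toy illustration of "learning the syntax of molecular data", the sketch below fits a character-level bigram model to a handful of SMILES strings and samples from it autoregressively; the real models discussed in the session are deep networks trained at scale on HPC resources.

```python
# A character-level bigram model "learns" which characters tend to follow which
# in a handful of SMILES strings, then samples new strings autoregressively.
# The output mimics the syntax only; it is not valid or useful chemistry.
import numpy as np

smiles = ["CCO", "CCN", "CCC", "C=O", "CC(=O)O", "c1ccccc1"]   # tiny toy corpus
BOS, EOS = "^", "$"
chars = sorted({c for s in smiles for c in s} | {BOS, EOS})
idx = {c: i for i, c in enumerate(chars)}

counts = np.ones((len(chars), len(chars)))       # +1 Laplace smoothing
for s in smiles:
    seq = BOS + s + EOS
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
counts[:, idx[BOS]] = 0                          # never emit the begin marker
probs = counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
def sample(max_len=20):
    out, c = [], BOS
    for _ in range(max_len):
        c = rng.choice(chars, p=probs[idx[c]])   # draw the next character
        if c == EOS:
            break
        out.append(c)
    return "".join(out)

print([sample() for _ in range(5)])
```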
Organizer(s): John Gounley (Oak Ridge National Laboratory), Massimiliano Lupo Pasini (Oak Ridge National Laboratory), and Ayana Ghosh (Oak Ridge National Laboratory)
Domain: Chemistry and Materials
Scalable Optimal Control and Learning Algorithms in High Performance Computing
Large-scale scientific computing is increasingly pervasive in many research fields and industrial applications, such as biological modeling of coupled soft-tissue systems, computational fluid dynamics and turbulence, and tsunami inundation via shallow water equations. Not only can forward simulation of these problems be prohibitively expensive, but the additional computation and storage of gradient or adjoint information further compounds this numerical burden. The matter becomes further complicated when solving an inverse or optimal control problem, which requires many such forward simulations and can suffer from the curse of dimensionality. This burden can be alleviated somewhat via reduced-order surrogate models, randomized compression techniques, or neural-network surrogates designed to imitate dynamics or operators. The goal of this minisymposium is to bring together researchers working on finite- and infinite-dimensional control and simulation to discuss new methodologies for analyzing and solving problems in the extreme computing regime. It focuses on new techniques in model reduction for high-fidelity physics simulations; surrogate modeling techniques based on learning; distributed and multilevel optimization on HPC systems; compression and storage with both deterministic and randomized methods; nonsmooth optimization; and adaptive discretizations.
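A minimal sketch of the reduced-order-surrogate idea follows, using synthetic snapshots and an SVD (proper orthogonal decomposition) basis; it only illustrates the compression step that makes repeated forward evaluations affordable inside an outer optimisation loop.

```python
# Snapshots of an analytically generated, heat-equation-like field are compressed
# with an SVD; the leading modes form a reduced basis in which repeated forward
# evaluations inside an optimisation loop would become cheap.
import numpy as np

nx, nt = 200, 100
x = np.linspace(0.0, 1.0, nx)
times = np.linspace(0.01, 1.0, nt)

# snapshot matrix: each column is the field at one time instant
snapshots = np.array([np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
                      + 0.3 * np.exp(-4 * np.pi**2 * t) * np.sin(2 * np.pi * x)
                      for t in times]).T

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                       # two POD modes capture this toy field
basis = U[:, :r]
energy = s[:r].sum() / s.sum()

u_full = snapshots[:, -1]                   # a full-order state (nx values)
u_reduced = basis.T @ u_full                # its r reduced coordinates
u_rebuilt = basis @ u_reduced               # lift back to full order
err = np.linalg.norm(u_full - u_rebuilt) / np.linalg.norm(u_full)
print(f"retained singular-value energy: {energy:.4f}, reconstruction error: {err:.2e}")
```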
Organizer(s): Robert Baraldi (Sandia National Laboratories), and Harbir Antil (George Mason University)
Domain: Computational Methods and Applied Mathematics
The Science of Scientific Software Development and Use: Investment in Software is Investment in Science
Scientific software development is integral to scientific discovery, and reliable software practices are crucial for ensuring the trustworthiness of scientific results. Given this significance, studying and enhancing scientific software development through scientific methods is highly beneficial. The challenge lies in improving the development and use of scientific software across a broad scope of concerns, from individual practices to multi-disciplinary team efforts, while addressing the need for more effective tools, methods, and infrastructure. The US Department of Energy’s report on The Science of Scientific Software Development and Use highlights three priority research directions and three cross-cutting themes, focusing on developer productivity, team dynamics, workforce challenges, and broader scientific collaboration. This minisymposium explores these research directions and themes, drawing on experiences with emerging tools, including generative AI, as well as the social and cognitive sciences and organizational psychology. It emphasizes the importance of considering all aspects of software development, from maintenance to policy development. Underpinning the discussion will be a focus on improving individuals’ and teams’ productivity and work experience, and on creating pathways for the next generation of scientists to gain experience and join our communities.
Organizer(s): Michael A. Heroux (Sandia National Laboratories, St. John’s University), and Jim Willenbring (Sandia National Laboratories)
Domain: Applied Social Sciences and Humanities
Supercomputing for the Drug Response Prediction Community
This minisymposium will offer an opportunity for experts in scientific computing and the life sciences to share knowledge on the challenging task of comparing machine learning models for cancer drug response prediction. Presented by a range of cancer scientists and computer scientists, it will provide an overview of cancer drug response prediction and of the computing challenges posed by this problem. Two presenters will cover the development of drug response models; they are drawn from the community of model developers whose models are now available for comparison. Two other presenters will cover the usage of drug response models; they are drawn from the community of stakeholders who use cancer models in broader research initiatives in cancer science and the development of treatments. They will describe how their teams use computational and data products, how they interact with developers, and what the future of drug response prediction may hold. This minisymposium is not simply about cancer prediction: the emerging collection of models is a valuable asset to the machine learning community and may be used for a range of studies of machine learning systems, their performance, accuracy, and other behavior.
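As a hedged illustration of what a minimal model comparison can look like, the sketch below cross-validates two regression baselines on synthetic (cell line, drug) feature vectors. The data, models, and metric are stand-ins chosen only so the script runs end to end; they are not a description of the community's actual benchmarking workflow.

```python
# Illustrative sketch only: cross-validated comparison of two regression
# baselines for drug response prediction. Real community benchmarks pair
# cell-line omics profiles with drug descriptors; here synthetic features
# and targets stand in so the script runs as-is.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n_pairs, n_features = 500, 64            # (cell line, drug) pairs x features
X = rng.normal(size=(n_pairs, n_features))
# Synthetic "response" (e.g. a normalised IC50/AUC-like value).
y = X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=n_pairs)

models = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)

for name, model in models.items():
    # Negative MSE is scikit-learn's scoring convention; flip the sign.
    scores = -cross_val_score(model, X, y, cv=cv,
                              scoring="neg_mean_squared_error")
    print(f"{name}: MSE {scores.mean():.3f} +/- {scores.std():.3f}")
```

Real benchmarking efforts additionally care about cross-study generalization, so splits grouped by cell line, drug, or study typically replace the plain k-fold split used here.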
Organizer(s): Justin M. Wozniak (Argonne National Laboratory, University of Chicago), and Thomas Brettin (Argonne National Laboratory)
Domain: Life Sciences
Synergizing AI and HPC for Pandemic Preparedness with Genomics and Clinical Risk Assessment
To address emerging virus variants, our strategy integrates next-generation vaccines and personalized disease treatments. In bioinformatics sequencing, HPC and AI power vaccine development and infection control; in parallel, AI-driven clinical risk assessment aids healthcare systems during pandemics. Personalized disease stratification involves AI models for risk assessment and interpretability-guided deep learning in medical applications, while standardized Electronic Health Records (EHRs) and federated learning ensure data integrity and privacy. In bioinformatics sequencing, we tackle the following challenges:
- Drug discovery for next-generation vaccines: applying bioinformatics to identify therapeutic candidates from genomic data for infectious diseases.
- Evolutionary analysis of infection spread: analyzing viral sequence data to identify important genes, functions, and evolutionary patterns for minimizing and tracking infection spread.
- Accelerating the genotype-phenotype workflow: correlating genotype to phenotype for efficient drug discovery in functional genomics.
For AI-driven clinical risk assessment, methods include:
- AI models for disease progression: using advanced deep learning to characterize disease subtypes based on unsupervised and supervised learning.
- Interpretability-guided deep learning: enhancing comprehension of medical AI by addressing bias, shortcut learning, and susceptibility to attacks.
- Standardized EHRs and federated learning: ensuring uniform EHR usage, standardizing data formats, and addressing privacy concerns through federated learning.
This minisymposium brings together experts to accelerate pandemic preparedness with clinico-genomic data to improve diagnosis.
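Because federated learning recurs in several of these threads, here is a minimal sketch of the federated averaging (FedAvg) idea for a logistic-regression risk model. The synthetic site data, site count, and learning rate are assumptions made purely for illustration and do not reflect any presenter's pipeline.

```python
# Minimal FedAvg sketch (assumed setup, not the presenters' method): several
# EHR-holding sites train a shared logistic-regression risk model without
# exchanging patient records; only model weights leave each site.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_features = 4, 10

def make_site(n=200):
    # Each "hospital" holds its own synthetic feature/label matrix.
    X = rng.normal(size=(n, n_features))
    logits = X @ np.linspace(-1, 1, n_features)
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(float)
    return X, y

sites = [make_site() for _ in range(n_sites)]

def local_update(w, X, y, lr=0.1, epochs=5):
    # Plain gradient descent on the local logistic loss.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w_global = np.zeros(n_features)
for rnd in range(20):                      # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    # Server aggregates by size-weighted averaging of the site models.
    w_global = np.average(local_ws, axis=0, weights=sizes)

acc = np.mean([
    ((1 / (1 + np.exp(-X @ w_global)) > 0.5) == y.astype(bool)).mean()
    for X, y in sites
])
print(f"mean accuracy across sites after FedAvg: {acc:.3f}")
```

Only model weights cross institutional boundaries in each round, which is the property that makes this family of approaches attractive for privacy-sensitive EHR data.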
Organizer(s): John Anderson Garcia Henao (University of Bern, ARTORG Center for Biomedical Engineering Research), Kary Ann del Carmen Ocaña Gautherot (LNCC), and Carlos Jaime Barrios (Industrial University of Santander)
Domain: Life Sciences
Towards km-Scale Weather and Climate Simulations on (pre-)Exascale HPC Systems
Weather and climate modelling is one of the prime application domains for the incipient era of exascale computing. The unprecedented amount of compute resources allows, for the first time, simulations at the km-scale over climate time scales, but poses huge challenges for the developers of Earth system models, which often consist of large, monolithic Fortran code bases. It creates a need for large-scale refactoring, adaptation to new programming models, and a transition to a scalable and sustainable development process that allows models to adapt continuously to the rapidly evolving portfolio of new hardware and software stacks. The goal of this minisymposium is to focus on full model deployment on such large-scale systems, take stock of what the weather and climate community has achieved so far, share success and failure stories from porting and scaling simulations to large fractions of these systems, and invite others to learn from these insights. It will showcase experiences from Destination Earth, EXCLAIM, and the EarthWorks project. The minisymposium is twinned with the session “Bridging the Gap: Addressing Software Engineering Challenges for High Resolution Weather and Climate Simulations”, which takes a closer look at the software engineering strategies that power these endeavours.
Organizer(s): Claudia Frauen (DKRZ), and Balthasar Reuter (ECMWF)
Domain: Climate, Weather and Earth Sciences
Unleashing the Power within Data Democratization: Building an Inclusive Community, One Use Case at a Time
This minisymposium emphasizes the pivotal role of data in scientific research and the need for a robust global data infrastructure. Organized by the National Science Data Fabric (NSDF), an international effort for data democratization, it integrates insights from global initiatives presenting cyberinfrastructures for data access, with a focus on navigating earth science data, fostering FAIR data, and facilitating Nordic research. The sessions discuss interdisciplinary approaches to building a global data infrastructure that integrates data delivery across storage, networking, computing, and education in order to democratize data-driven scientific discovery. The minisymposium serves as a global forum for scientists to discuss challenges, share successful use cases, and exchange insights on data-driven cyberinfrastructures, with presentations that bridge the gap between theory and practice and showcase impacts on scientific progress. Its international character underscores the global relevance of these initiatives and provides a dynamic platform for researchers to share best practices, report on strides toward an integrated data infrastructure, and foster global collaboration for scientific breakthroughs.
Organizer(s): Michela Taufer (University of Tennessee), Christine Kirkpatrick (San Diego Supercomputer Center), and Valerio Pascucci (University of Utah)
Domain: Chemistry and Materials
Updating Workflows in Virtual Drug Discovery with Current Technologies
Drug discovery is a difficult process that often relies on fortuitous findings, and the number of such discoveries has stagnated for decades despite numerous technological advances. The role of HPC in the field is being transformed by the advent of large-scale machine learning models in recent years, which promise to revolutionize parts of the discovery pipeline. Traditionally, computation provides ways to sample the mutual conformational space of ligands and receptors, or predicts physicochemical properties of small molecules, to name just two examples. Our minisymposium zooms in on the following overarching considerations. First, a mix of access to technology, computational resources, and data dictates the ease and feasibility of use, and therefore the widespread adoption, of modern AI-based prediction methods. Second, complexes of small molecules and receptors pose the particular challenge that they raise many specific problems yet offer little generalizability. Third, it is difficult to analyze results objectively when the ultimate goal is simply to discover a new binder, which has limited the ability to transfer and abstract knowledge. Following these lead concerns, our minisymposium is meant to foster the exchange of technologies and to fortify the ongoing discourse on objectivity and standardization in computational drug discovery.
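As a small, hedged example of the "predict physicochemical properties" side of such pipelines, the sketch below computes a handful of descriptors for a few well-known molecules, assuming the open-source RDKit toolkit is installed. The molecules and the rule-of-five style cutoffs are illustrative choices, not part of any speaker's workflow.

```python
# Illustrative sketch (assumes RDKit is installed): per-molecule descriptors
# of the kind that traditional filtering steps consume. The SMILES strings
# and the drug-likeness cutoffs below are example choices only.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

candidates = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:              # skip unparsable structures
        continue
    props = {
        "MolWt": Descriptors.MolWt(mol),
        "LogP": Descriptors.MolLogP(mol),
        "TPSA": Descriptors.TPSA(mol),
        "HBD": Descriptors.NumHDonors(mol),
        "HBA": Descriptors.NumHAcceptors(mol),
        "QED": QED.qed(mol),
    }
    # Simple rule-of-five style flag as an example downstream decision.
    drug_like = (props["MolWt"] < 500 and props["LogP"] < 5
                 and props["HBD"] <= 5 and props["HBA"] <= 10)
    print(name, {k: round(v, 2) for k, v in props.items()},
          "drug-like:", drug_like)
```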
Organizer(s): Andreas Vitalis (University of Zurich), Yang Zhang (University of Zurich), and Cassiano Langini (Sibylla Biotech)
Domain: Life Sciences
What the FORTRAN? Lost in Formula Translation
Fortran, the primary programming language underpinning many operational weather and climate codes, was built around the fundamental principle that performance optimisation is left to the compiler. However, with the emergence of GPU accelerators, significant refactoring, often beyond the simple addition of pragmas, is needed to achieve good GPU performance from established, operational, vectorised CPU code. This has led to the rise of DSLs and source-to-source methods that often borrow elements from compiler theory to bridge the CPU-GPU gap, leaving one unspoken question unanswered: Why the FORTRAN does my compiler not do this for me? In this minisymposium we aim to explore this question by looking at ECMWF’s CLOUDSC benchmark, an NWP mini-app designed to assess (and torture) compilers. Many GPU-optimised variants of this benchmark exist, including Fortran- and C-based offload implementations (OpenACC, OpenMP, CUDA, HIP), which provide established performance baselines on different GPU architectures. Instead of further optimising these with more intrusive code changes, we ask the question “How close to the original vector-style Fortran code can we get without sacrificing performance?” We aim to explore this question with technologists and compiler enthusiasts from across the HPC and academic spectrum.
Organizer(s): Michael Lange (ECMWF), and Balthasar Reuter (ECMWF)
Domain: Climate, Weather and Earth Sciences