Updated 16 December 2025
Harnessing Responsible AI for Science: Taming Open Data
Manish Parashar.
University of Utah, USA.
Inaugural Chief AI Officer, University of Utah, USA.
Executive Director of the Scientific Computing and Imaging (SCI) Institute.
Presidential Professor in the Kahlert School of Computing.
Artificial intelligence (AI) and open data have become essential engines for scientific discovery and innovation. However, realizing this transformative potential requires a transdisciplinary approach that ensures research and development can effectively and responsibly leverage the diversity of data sources. Despite the exponential growth of available digital data sources and the ubiquity of non-trivial computational power for processing this data, realizing data-driven, AI-enabled science workflows remains challenging. In this talk, I will discuss the importance of democratizing AI R&D, including access to open data and advanced cyberinfrastructure. I will introduce the University of Utah’s One-U Responsible AI Initiative, which aims to catalyze an innovation ecosystem at the University of Utah and across the state. I will also present the vision, architecture, and deployment of the National Data Platform project, as part of a broader national cyberinfrastructure, aimed at catalyzing an open and extensible data ecosystem for science.
—
Accelerating GPU Adoption on Supercomputers in Japan and Application Development Support for the Next Generation
Taisuke Boku.
Center for Computational Sciences, University of Tsukuba, Japan.
Director, Advanced HPC-AI Research and Development Support Center / Professor.
In Japan, the National Flagship Systems (NFS) have been developed as massively parallel CPU systems based on multi-core or many-core general-purpose CPUs with tens of thousands of compute nodes. Accordingly, the governmental programs for code development and the promotion of scientific computation have not primarily supported accelerated computing such as GPU computing. On the other hand, GPU-ready systems have been introduced at several supercomputing centers of national universities, which are categorized as National Infrastructure Systems (NIS) in HPCI, Japan’s framework for collaborative supercomputer use. However, the government (MEXT) has finally decided to move to GPU computing even for the next-generation NFS after Fugaku, under the project name “Post-Fugaku”; the basic design of the system has been started by RIKEN R-CCS under the code name “Fugaku-NEXT”, based on Fujitsu’s next-generation CPU (MONAKA-X) and NVIDIA GPUs. In this situation, rapid code development toward large-scale GPU-ready systems is needed for both NFS and NIS, supported by a governmental program. Responding to the governmental call, we launched a new national center named the “Advanced HPC-AI Research and Development Support Center”, or HAIRDESC for short, in Kobe, Japan. The governmental support for the center runs 4.5 years, from Oct. 2025 to Mar. 2030, toward the planned start of Fugaku-NEXT operation. In HAIRDESC, we will construct a standard set of GPU codes covering a wide variety of coding styles and application fields, multiple levels from novice to expert, and GPUs from multiple vendors (AMD and NVIDIA). HAIRDESC is also supported by three core organizations: Univ. of Tsukuba, Univ. of Tokyo, and Inst. of Science Tokyo, where top-level GPU researchers gather and operate the largest-scale GPU supercomputers under MEXT.
In this talk, I will present the current plan for Fugaku-NEXT (by courtesy of R-CCS) and the progress of GPU system installation at NIS centers, followed by HAIRDESC’s plans and activities, including advanced GPU research at the three core organizations. I will also discuss the differences in memory architecture between two CPU-GPU coupling technologies, NVIDIA GH200 and AMD MI300A, with a performance analysis.
—
Societal Computing: Designing AI-Ready Ecosystems for a More Resilient Future
Ilkay Altintas.
University of California, San Diego, USA.
Chief Data Science Officer, San Diego Supercomputer Center
Founder and Director, Societal Computing Innovation Lab (SCIL)
Division Director, Cyberinfrastructure and Convergence Research (CICORE)
Societal Computing reframes computing as an innovation engine for collective resilience and impact, linking cutting-edge science, data, models, AI systems, and communities to solve complex challenges at scale. In this talk, I will outline a vision for building AI-ready data ecosystems that empower researchers, educators, policymakers, and the public to work from a shared digital fabric. Drawing on lessons from wildfire science, resilient agriculture, public health, and education, I will describe how structured collaboration, national cyberinfrastructure, and responsible AI create a new kind of societal operating system. Through examples from the National Data Platform and the Wildfire Science and Technology Commons, I will show how convergence research becomes actionable when we bridge data stewardship, computational workflows, multi-modal AI, and community-centered design. The talk will highlight emerging opportunities for building trustworthy, inclusive, and durable socio-technical systems that enable science and society to learn, adapt, and innovate together.
—
ArtIMis – AI for Mission at LANL
Nathan DeBardeleben.
Los Alamos National Laboratory, USA.
High Performance Computing Design (HPC-DES)
UltraScale Systems Research Center (Co-Executive Director, Technical Operations)
Senior Research Scientist
This talk provides an overview of the ArtIMis (AI for Mission) project at LANL. ArtIMis is an institutionally funded AI initiative, supported by lab leadership, that brings together over 100 LANL scientists for focused AI R&D serving LANL’s various missions. In this talk, Dr. DeBardeleben will cover the goals, accomplishments, and plans of ArtIMis in its second year, including multiphysics foundation models, AI for materials discovery and fracture, agentic AI uses, AI for therapeutics, and other topics. Grand challenges around AI include how to accelerate scientists’ workflows through agentic control of simulation workflows and fast surrogate models that enable exploration of parameter spaces for design and discovery.
—
System Software Solutions for FugakuNEXT and Beyond
Kento Sato.
RIKEN Center for Computational Science (R-CCS), Japan.
Team Principal, High Performance Big Data Research Team.
The FugakuNEXT project advances Japan’s high-performance computing infrastructure, aiming to extend the capabilities of the current Fugaku system while exploring new architectures for the AI-for-Science era. This talk introduces ongoing research and development on system software solutions that will underpin FugakuNEXT and beyond.
—
The Digital Wind Tunnel: FABRIC Network Instrument, Edge-to-Core Workflows, and the Future of Decentralized CI Resource Management
Anirban Mandal.
University of North Carolina at Chapel Hill, US.
Director of the Network Research and Infrastructure Group (NRIG) at RENCI (Renaissance Computing Institute).
The FABRIC network testbed is an indispensable “research instrument”, functioning as a crucial enabler for experimentation and evaluation of distributed scientific workflow technologies on next-generation cyberinfrastructure. This presentation will focus on Edge-to-Core workflows, which are critical for science domains like disaster response using UAVs, requiring efficient orchestration and management of sensor data across edge devices, the network, and core cloud resources. Research leveraging the FABRIC testbed provides tools for scientists to include edge computing devices in computational workflows, essential for low-latency applications.
The presentation will also delve into a radically alternative, fully decentralized approach to resilient resource management for scientific workloads, inspired by swarm intelligence (SI) and multi-agent systems. This research includes the development of a novel, greedy consensus algorithm for distributed job selection, with implementations utilizing hierarchical topologies deployed and evaluated directly on FABRIC. FABRIC acts as the essential “digital wind tunnel”, providing isolated and reproducible environments necessary to test complex workflow execution and resource management under controlled anomalous conditions that production systems cannot support.
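As a rough illustration of the decentralized, greedy job-selection idea described above (a hypothetical sketch only: the agent and job structures, the slack-based bidding rule, and the round-based loop are invented for exposition and are not the project’s algorithm; the winner-selection step stands in for a distributed consensus):

    # Hypothetical sketch of greedy, swarm-style job selection among agents.
    # Names and the bidding rule are illustrative only.
    import random

    class Job:
        def __init__(self, job_id, demand):
            self.id = job_id
            self.demand = demand  # e.g., cores requested

    class Agent:
        def __init__(self, agent_id, capacity):
            self.id = agent_id
            self.capacity = capacity
            self.assigned = []

        def bid(self, job):
            # Greedy local score: prefer jobs that fit and leave the least slack.
            if job.demand > self.capacity:
                return None
            return self.capacity - job.demand  # lower slack = better fit

    def decentralized_assign(agents, jobs, rounds=3):
        """Each round, agents bid on unassigned jobs; the lowest-slack bid wins.
        Repeated rounds emulate gossip-style convergence toward a consensus."""
        unassigned = list(jobs)
        for _ in range(rounds):
            still_open = []
            for job in unassigned:
                bids = [(a.bid(job), a) for a in agents]
                bids = [(score, a) for score, a in bids if score is not None]
                if not bids:
                    still_open.append(job)
                    continue
                _, winner = min(bids, key=lambda b: b[0])
                winner.assigned.append(job)
                winner.capacity -= job.demand
            unassigned = still_open
        return unassigned  # jobs no agent could host

    agents = [Agent(i, capacity=random.randint(8, 64)) for i in range(4)]
    jobs = [Job(j, demand=random.randint(4, 32)) for j in range(10)]
    print("unassigned jobs:", [j.id for j in decentralized_assign(agents, jobs)])

In an actual swarm-inspired deployment, the winner-selection step would itself be computed by message exchange among agents, for example over a hierarchical topology instantiated on FABRIC, rather than by a central loop.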
—
Unifying Data Representation in Coupled Simulation-AI Workflows
Ana Gainaru.
Oak Ridge National Laboratory, USA.
Computer Scientist (co-lead of the Data Understanding thrust in SciDAC RAPIDS and lead of the Self-Improving AI Models thrust in the Transformational AI Models Consortium (ModCon)).
The next generation of HPC applications is represented by hybrid approaches that weave together traditional simulations and modern AI. However, a critical bottleneck in integrating HPC with AI is the “lack of awareness” between workflow components. The outputs of HPC applications are often analyzed only sparingly before archival, effectively becoming inaccessible for future training codes, because the manual, time-consuming process of finding and processing datasets for each analysis purpose frequently outweighs the cost of re-running simulations. This fragmentation results in complex, brittle workflows where data management is treated as an afterthought. In this presentation, we propose a unified framework for managing the complex lifecycle of data in hybrid AI-HPC systems. We will address the limitations of current domain-specific solutions by introducing abstractions that map the relationships between raw simulation outputs, processed training sets, and surrogate model inference. By creating a system where data provenance and transformation history are persistent, we enable workflows that “learn” from previous executions. Attendees will learn how to design workflows that minimize redundant processing, facilitate cross-domain optimization transfer, and ensure that the massive datasets required for AI training remain accessible, structured, and reusable.
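To make the idea of persistent provenance concrete, here is a minimal sketch (hypothetical; the class and field names are invented for this illustration and do not reflect the framework presented in the talk) of a record type that links raw simulation outputs, derived training sets, and surrogate-model artifacts through an explicit transformation history:

    # Hypothetical provenance-aware record linking simulation outputs,
    # derived training sets, and surrogate-model artifacts.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class DataArtifact:
        uri: str                       # where the data lives (file, object store, ...)
        kind: str                      # "simulation_output" | "training_set" | "surrogate_model"
        derived_from: List["DataArtifact"] = field(default_factory=list)
        transformation: Optional[str] = None  # e.g., "subsample:every-10th-step"

        def lineage(self):
            """Walk back through parents to reconstruct the transformation history."""
            chain, stack = [], [self]
            while stack:
                node = stack.pop()
                if node.transformation:
                    chain.append((node.uri, node.transformation))
                stack.extend(node.derived_from)
            return chain

    raw = DataArtifact(uri="sim/run_042/fields.bp", kind="simulation_output")
    train = DataArtifact(uri="train/run_042.npz", kind="training_set",
                         derived_from=[raw], transformation="subsample:every-10th-step")
    model = DataArtifact(uri="models/surrogate_v1.pt", kind="surrogate_model",
                         derived_from=[train], transformation="train:unet-epochs-50")
    print(model.lineage())

Because each artifact records what it was derived from and how, a later workflow can query the lineage and reuse an existing training set rather than re-extracting it from archived simulation output.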
—
Overcoming Parallelism Challenges in Data Analytics Using Sparse Linear Algebra
Giulia Guidi.
Cornell University, US.
Assistant Professor of Computer Science.
The diverse and non-trivial challenges of parallelism in data analytics require computing infrastructures that go beyond the demands of traditional simulation-based sciences. The growing data volume and complexity have outpaced the processing capacity of single-node machines in these areas, making massively parallel systems an indispensable tool. However, programming high-performance computing (HPC) systems poses significant productivity and scalability challenges. It is important to introduce an abstraction layer that provides programming flexibility and productivity while ensuring high system performance. As we enter the post-Moore’s Law era, effective programming of specialized architectures is critical for improved performance in HPC. As large-scale systems become more heterogeneous, their efficient use for new, often irregular and communication-intensive data analysis computations becomes increasingly complex. In this talk, we discuss how sparse linear algebra can be used to achieve performance and scalability on extreme-scale systems while maintaining productivity for emerging data-intensive scientific challenges. (TBC)
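As a small, self-contained illustration of the general approach (not taken from the talk): many irregular data-analytics kernels can be phrased as operations on sparse matrices, so the same abstraction that runs on a laptop with SciPy can be scaled out with distributed sparse-linear-algebra libraries such as CombBLAS. The sketch below counts shared neighbors between graph nodes as a sparse matrix-matrix product:

    # Illustrative only: a graph-analytics kernel expressed as sparse linear algebra.
    import numpy as np
    from scipy.sparse import csr_matrix

    # Small undirected graph stored as an adjacency matrix A.
    rows = [0, 1, 1, 2, 2, 3, 0, 2]
    cols = [1, 0, 2, 1, 3, 2, 2, 0]
    A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

    common_neighbors = A @ A           # entry (i, j) = number of shared neighbors
    common_neighbors.setdiag(0)        # ignore a node's walk back to itself
    common_neighbors.eliminate_zeros()
    print(common_neighbors.toarray())

The point of the abstraction is that the analyst writes the kernel once as a sparse matrix product, while the underlying library decides how to partition, communicate, and schedule it on a heterogeneous, extreme-scale system.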
—
Gigawatts of supercomputing
Andrew Jones.
Microsoft, UK.
Future AI Infrastructure, Supercomputing & HPC
Azure Specialized Engineering.
—
Evaluation of AI systems
Emily Casleton.
Los Alamos National Laboratory, USA.
Statistical Sciences Group, CAI-4
—
