
Abstracts 2024

Updated 9 February 2024.

Check Programme here

DAY 1 – Monday 12 February 2024

Manish Parashar – SCI Institute Director and Presidential Professor of Computer Science, University of Utah. US.

Open and equitable access to scientific data is essential to addressing important scientific and societal grand challenges, and to the research enterprise more broadly. This talk will discuss the importance and urgency of open and equitable data access and explore the barriers and challenges to such access. It will then introduce the vision and architecture of the National Data Platform project, aimed at catalyzing an open, equitable and extensible data ecosystem, and highlight key use cases.


Ruud van der Pas – Senior Principal Software Engineer, Oracle Linux Engineering. The Netherlands.

There are many misconceptions about application performance tuning. Unfortunately, they are often not only outdated but sometimes plain wrong from the start. They are also rather persistent.

We start with a biased overview of the main misconceptions and share the presenter’s take on them. This is followed by coverage of the real challenges faced by those interested in application performance tuning.

Many applications have low-hanging fruit when it comes to improving performance, but the definition of low depends on how tall you are. We conclude with a case study that demonstrates this. While the fixes are easy, without the proper knowledge there would be no way to achieve the remarkable performance improvement realized.


Ian Foster – Senior Scientist and Distinguished Fellow. Director, Data Science and Learning Division, Argonne National Laboratory (ANL). Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago. US/New Zealand.

We are on the verge of a global communications revolution based on ubiquitous high-speed optical, 5G, 6G, and free-space optics technologies. The resulting communications fabric can enable new ultra-collaborative research modalities that pool sensors, data, and computation with unprecedented flexibility and focus. But realizing these new modalities requires that we overcome the friction that impedes actions that traverse institutional boundaries. The solution, I argue, is new global science services to mediate between user intent and infrastructure realities. I describe our experiences building and operating such services, and present examples of their application and use.


Check Programme here

DAY 2 – Tuesday 13 February 2024

John Reid – Programme Leader – Kaitiaki Intelligence Platforms. New Zealand

Jay Whitehead – Director, Matatihi. New Zealand

The rapid advancement and decreasing costs of sensing technologies hold the promise of automating environmental reporting, ushering in a new era of transparency for businesses, industry sectors, and national economies. Concurrently, there is a global trend towards the establishment of comprehensive environmental reporting frameworks and standards for businesses, with a shift from voluntary to mandatory reporting requirements at the national level. This talk delves into the development path and strategy of the Eco-index, which has devised a suite of tools geared towards facilitating the future automation of biodiversity reporting and the valuation of ecosystem services. Leveraging geospatial tools, the Eco-index has successfully crafted science-based targets and mapping suites for biodiversity restoration at national, regional, and catchment scales. Additionally, through the utilization of remote sensing technologies and artificial intelligence (AI), the Eco-index is automating the process of biodiversity detection. This breakthrough enables the measurement of progress toward set targets and offers a means of verifying ecosystem restoration efforts, thus paving the way for the development of future biodiversity credit systems. A pivotal factor contributing to the success of the Eco-index has been the formulation of a catchment-oriented approach that can be readily applied at highly localized levels, such as the farm scale. This accomplishment has been made possible through the collaboration of an interdisciplinary team of innovators and the active involvement of farmers, industry representatives, iwi, and government stakeholders in the testing and refinement of tools.


Jess Robertson – Chief Scientist, Data Science & HPC, NIWA, New Zealand. (TBC)


Estela Suárez – Head of Next Generation Architectures and Prototypes Group, Jülich Supercomputing Center. Jülich, North Rhine-Westphalia, Germany.

Idling components on HPC systems constitute a waste of energy, money and natural resources. Energy can be saved if those components are shut down while not being used, e.g. through intelligent and dynamic power-down/up mechanisms. However, the natural resources consumed to produce those devices (and their procurement costs) cannot be recovered this way. Therefore, to improve the overall energy efficiency of HPC systems it seems more reasonable to ensure that all components are put to good use at each moment in time – this will maximize the system throughput by ensuring maximum utilization.
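As a back-of-envelope illustration of the scale of that waste, consider the sketch below; the node count, idle power draw and utilization are assumed round numbers for the example, not figures from the talk.

```python
# Illustrative only: a rough estimate of the energy burned by idle nodes.
# The node count, idle power and utilization below are assumed round numbers,
# not figures from the talk.

NODES = 1000
IDLE_POWER_W = 200             # assumed per-node power draw while idle
AVG_UTILIZATION = 0.80         # assumed fraction of node-hours doing useful work
HOURS_PER_YEAR = 24 * 365

idle_node_hours = NODES * HOURS_PER_YEAR * (1 - AVG_UTILIZATION)
idle_energy_mwh = idle_node_hours * IDLE_POWER_W / 1e6   # Wh -> MWh

print(f"{idle_node_hours:,.0f} idle node-hours per year "
      f"-> ~{idle_energy_mwh:,.0f} MWh spent powering idle nodes")
```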

The first question to be asked is therefore: how efficiently are HPC systems used in current practice? Application restructuring and optimization could make a significant contribution here, but the volume of legacy code to be tackled means that decades will pass before such improvements become significant on the global HPC scale. What can we do in the meantime? Is there potential for improvement from the system software/middleware and administration side alone? This talk will present an overview of the status quo and collect ideas for future improvements, taking technical limitations and implementation challenges into account.


Doug Kothe – Chief Research Officer and Science and Technology Executive for Sandia National Laboratories (SNL), US.

The US Department of Energy (DOE) Exascale Computing Project (ECP) has just “crossed the finish line”, far exceeding expectations relative to its targeted Key Performance Parameters (KPPs) of application capability and performance, a functional and portable software stack, and investment in node and system hardware design for the first three exascale systems in the US. Early results and expected outcomes will be highlighted for ECP’s mission-need application projects – each addressing an exascale challenge problem – and for ECP’s Extreme-Scale Scientific Software Stack (E4S), which includes advanced mathematical libraries, extreme-scale programming environments, development tools, visualization libraries, and the software infrastructure to support large-scale data management and data science for science and security applications. ECP has also more than “dabbled” in AI, as the three exascale systems collectively offer >100K GPUs, with each system poised to be a game-changing training and inference engine for DOE foundational models in science, energy, and national security. Early and promising results in AI model training performance will be highlighted on the Frontier system at Oak Ridge National Laboratory (ORNL). Examples of possible far-reaching outcomes for AI systems in DOE mission space in post-ECP programs will also be given.


Rob Lindeman – Professor / Director, HIT Lab NZ. University of Canterbury. New Zealand

Despite the most recent hype about the Metaverse, virtual reality (VR) has been an “emerging” technology for more than 30 years. So, why hasn’t it fulfilled its promise as a transformational technology? Are we finally there yet?

In this talk, I will provide some insights into our current efforts at HIT Lab NZ to help people design proper VR experiences that support long-term and regular use of VR. I will provide specific ideas to help app developers deliver solutions that allow workers to train more effectively, or data analysts to carry out immersive data analytics work, without having to cut their sessions short due to discomfort. I will present concrete applications we have built using the heuristics we have identified so far.


Nicolás Erdödy – Director, Open Parallel Ltd – Project Leader, Listen to the Land. New Zealand.

Concerns over climate change and sustainable agriculture have made nation-wide, high-resolution environmental monitoring and modelling desirable. Recent developments in technology have made it affordable. An environment modelling network is a supercomputer, but not of a familiar kind. Conventional supercomputing approaches are appropriate for the modelling aspect, but not the monitoring aspect. While sensor networks are familiar in the Internet of Things (IoT), geographically remote sensors without access to mains power have harsher resource constraints than, say, internet-ready light bulbs. A “two-realm” approach to system software is needed.

This talk will summarise the Listen to the Land (Whakarongo ki te whenua) project, report on its progress over the past year, and propose a set of questions to discuss with the audience (Q&A plus un-conference day).


Dhabaleswar K. (DK) Panda – The Ohio State University

Artificial intelligence (AI) is transforming many sectors of society, such as agriculture, transportation, autonomous vehicles, and biodiversity. However, there is a massive and ever-growing gap between available AI techniques and their availability to end users across a range of application domains. Existing AI applications are developed in a largely ad-hoc manner, lacking coherent, standardized, modular, and reusable infrastructure. This talk will start with an overview of ICICLE (Intelligent CyberInfrastructure with Computational Learning in the Environment), an NSF AI Institute created to address these challenges. Next, we will focus on a set of specific challenges being addressed within ICICLE to achieve AI-enabled digital agriculture. Some of these challenges include semi-supervised learning with good accuracy to detect crop diseases with a fraction of the available data, aerial crop scouting through UAVs for real-time detection of crop diseases, and the design of edge-to-cloud/HPC AI-as-a-service. Detailed solutions to these challenges and the available software releases to democratize AI in digital agriculture will be presented.


Elle Archer – Chair – Te Matarau – The Māori Tech Association, New Zealand

In ‘A Code of Legacy’, we explore the intersection of Societal Philosophy and Cultural Knowledge within the realm of Multicore’s key theme. This presentation delves into how traditional wisdom and contemporary digital innovations can coalesce to shape a future where technology not only advances human capability but also honours and preserves cultural heritage. By weaving together threads of Māori and omni-cultural principles with digital technologies such as AI, sustainability, quantum computing, and cybersecurity, we propose a unique framework for technological development and consideration that is deeply rooted in cultural values, centred around our possible future, our legacy. This talk aims to redefine the narrative of technological progress, emphasising inclusivity, ethical considerations, and a holistic approach to digital and societal advancement.


Karen Willcox – Director, Oden Institute for Computational Engineering and Sciences. Professor, Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin. US/New Zealand.

A recent National Academies study defines a digital twin as “a set of virtual information constructs that mimics the structure, context, and behavior of a natural, engineered, or social system (or system-of-systems), is dynamically updated with data from its physical twin, has a predictive capability, and informs decisions that realize value. The bidirectional interaction between the virtual and the physical is central to the digital twin.” This talk will summarize key findings and insights from the study, with a particular focus on highlighting the opportunities for digital twins to revolutionize decision-making across scientific and engineering applications.
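As a toy illustration of that definition – not the study’s methodology, and with a hypothetical class, readings and thresholds – the bidirectional loop can be sketched as a virtual state that assimilates physical measurements, predicts, and feeds a decision back:

```python
# Toy sketch of the digital-twin loop in the definition above; the class name,
# sensor readings and thresholds are hypothetical, not from the study.

class ToyDigitalTwin:
    def __init__(self, state: float):
        self.state = state                       # virtual estimate, e.g. a temperature

    def assimilate(self, measurement: float, gain: float = 0.5) -> None:
        # dynamic update with data from the physical twin
        self.state += gain * (measurement - self.state)

    def predict(self, drift: float = 1.2) -> float:
        # predictive capability: naive forecast of the next value
        return self.state + drift

    def decide(self, limit: float = 80.0) -> str:
        # informs a decision that is fed back to the physical system
        return "throttle" if self.predict() > limit else "continue"


twin = ToyDigitalTwin(state=75.0)
for reading in [76.1, 79.5, 83.2]:               # streamed sensor data
    twin.assimilate(reading)
    print(f"virtual state {twin.state:.1f} -> action: {twin.decide()}")
```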


Check Programme here

DAY 3 – Wednesday 14 February 2024

David Brebner – CEO, Umajin. New Zealand

This talk will focus on improving data for 3D representations of scenes, Digital Twins and, more broadly, Neural Computation. The discussion will show how new approaches to data can significantly reduce computation for many types of AI.

Specifically, for images and scenes I’ll show better and sparser ways to represent 2D images and 3D scenes by representing the way light moves around the scene instead of ‘flat pixels’.

Better data is already revolutionizing LLMs, and I’ll outline approaches to improving both training data sets and validation data sets.


Robert Wisniewski – Senior Vice President and Chief Architect of HPC, Head of Samsung’s SAIT Systems Architecture Lab, US.

Since processing speed has discontinued its inexorable climb, achieving the performance increase expected of each new HPC generation has been challenging. Yet the HPC community has continued on an exponential curve, exceeding Moore’s law. In this talk, I will examine architectural directions we are exploring to help us continue achieving that super-exponential growth. I will describe three directions. The first is memory-coupled compute – bringing memory and compute much closer together. The second is tightly coupling general-purpose cores and accelerators. And the third is tightly coupling many nodes into a supernode. For each of these, I will provide motivation describing why they are valuable and then share the architectural directions we are exploring.


Adrian Cockcroft – Partner, OrionX. Former VP, AWS. Former Cloud Architect, Netflix. US.

The hardware that is optimized for training large language models represents the most extreme configurations that have ever been commercially sold as systems and cloud instances. For example, the AWS p5.48xlarge instance has 192 vCPUs, two terabytes of RAM, eight NVIDIA H100 GPUs with 640 GBytes of HBM connected via NVSwitch at 900 GBytes/s, and 3,200 Gbits/s of network connectivity. 900 GBytes/s is about one megabyte per microsecond, and 3,200 Gbits/s is 3.2 Megabits per microsecond – a megabyte in about three microseconds.
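Those figures are easy to sanity-check with a few lines of Python; this is a minimal sketch using only the numbers quoted above, treated as illustrative rather than an AWS specification.

```python
# Sanity-check of the bandwidth arithmetic quoted above; the constants are the
# figures from the abstract, not a spec sheet.

NVSWITCH_BW_BYTES_PER_S = 900e9    # 900 GBytes/s GPU-to-GPU via NVSwitch
NETWORK_BW_BITS_PER_S = 3200e9     # 3,200 Gbits/s aggregate network bandwidth
MEGABYTE = 1e6
MICROSECOND = 1e-6

# NVSwitch: bytes moved in one microsecond (~1 MB, as stated above)
nvswitch_mb_per_us = NVSWITCH_BW_BYTES_PER_S * MICROSECOND / MEGABYTE
print(f"NVSwitch moves ~{nvswitch_mb_per_us:.1f} MB per microsecond")

# Network: time to move one megabyte (~2.5 us, i.e. 'about three microseconds')
network_bytes_per_s = NETWORK_BW_BITS_PER_S / 8
time_for_1mb_us = MEGABYTE / network_bytes_per_s / MICROSECOND
print(f"Network moves 1 MB in ~{time_for_1mb_us:.1f} microseconds")
```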

This talk will dig into the layers of code that implement LLM training communication to see how they leverage the hardware, and begin to explore how they might map to next-generation CXL-based architectures.


Ewa Deelman – Research Director, Science Automation Technologies Division at Information Sciences Institute, University of Southern California. US.

This talk will briefly describe the Pegasus Workflow Management System and then present research on using AI techniques for anomaly detection during workflow execution. As workflows execute in distributed environments, faults and anomalies (for example, slowdowns in execution) can often occur. The talk will describe the methodology for data collection to support the modeling of anomalies and the techniques used for the modeling, and will conclude with future directions for the research.
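As a generic illustration of the kind of slowdown anomaly described here – this is not the Pegasus approach or its models, just a simple z-score check over hypothetical task runtimes – consider:

```python
# Generic illustration, not the method from the talk: flag slowdown anomalies
# in (hypothetical) workflow task durations with a simple z-score threshold.
from statistics import mean, stdev

durations = [41.2, 39.8, 40.5, 42.1, 40.9, 41.7, 97.3, 40.2]   # seconds per task

mu, sigma = mean(durations), stdev(durations)
slowdowns = [(i, d) for i, d in enumerate(durations) if (d - mu) / sigma > 2.0]

for idx, d in slowdowns:
    print(f"task {idx}: {d:.1f}s looks anomalous (mean {mu:.1f}s, sd {sigma:.1f}s)")
```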


Alok N. Choudhary – Harold Washington Professor, ECE and CS Department, Northwestern University. US.

How can AI help accelerate knowledge discovery and the exploration of design spaces? One example is learning from data to build predictive models that can enable exploration of scientific questions without relying upon underlying theory or even domain knowledge. Another example is the acceleration of so-called “inverse problems”, which explore the design space based on desired properties. For example, can AI learn basic chemistry from data? Or how can AI replace or reduce the need for expensive simulations or experiments to perform discoveries quickly or evaluate a feasible design space? This talk will present some learnings that address these questions, using various materials design and discovery examples.


Adolfy Hoisie – Chair, Computing for National Security Department, Brookhaven National Laboratory, US.

The talk will focus on recent advances aimed at developing AI techniques for architecture modeling and simulation (ModSim) as an alternative to existing ModSim methods and tools. Specifically, the ability of the new quantitative codesign methods to cope with heterogeneous architectures for complex workflows will be emphasized. A new frontier of AI-based ModSim applicability to the codesign of complex systems in a dynamic regime will be discussed.


Will Kamp – Consulting HPC/FPGA Engineer – SKA Central Signal Processors, Kamputed Ltd. New Zealand.

The Square Kilometre Array is finally in the construction phase, with the first receptors imminent. Five years remain to deliver the two telescopes, each with just 10% of the full million square metres of antenna collecting area. Even at this limited scale the computing challenges are significant, with raw input data rates of 2.5 TB/s, 24 hours/day, 365.25 days/year.

I will talk about the size of the problem and how the design of the correlator is partitioned so that it can be scaled through the delivery milestones to reach 197 receptors. As usual in large projects, there is extreme pressure to minimise cost and maximise utility. We will explore some of the methods I employ in the Mid.CBF FPGA-based correlator to maximise computing performance while minimising cost.
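To put that input rate in perspective, a back-of-envelope calculation using only the figures quoted above (actual ingest will vary as the arrays are built out) gives:

```python
# Rough scale of the raw input stream quoted above: 2.5 TB/s around the clock.
# Back-of-envelope only; real rates will vary as the telescopes are built out.

INPUT_RATE_TB_PER_S = 2.5
SECONDS_PER_DAY = 24 * 3600
DAYS_PER_YEAR = 365.25

per_day_pb = INPUT_RATE_TB_PER_S * SECONDS_PER_DAY / 1000    # petabytes per day
per_year_eb = per_day_pb * DAYS_PER_YEAR / 1000              # exabytes per year

print(f"~{per_day_pb:,.0f} PB/day, ~{per_year_eb:,.0f} EB/year of raw input")
```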


Satoshi Matsuoka – Director, RIKEN Center for Computational Science (R-CCS). Professor, Tokyo Institute of Technology. Japan.

At RIKEN R-CCS, the legacy of Fugaku, our flagship supercomputer, is just the beginning. We’re embarking on an ambitious journey to redefine the landscape of high-performance computing, with a keen focus on societal impact and scientific innovation. Our roadmap includes several groundbreaking projects that promise to elevate our capabilities and contributions to unprecedented levels. Central to our strategy is the “AI for Science” initiative, a project that places artificial intelligence at the heart of scientific research. This endeavor aims to harness the power of AI to decipher complex data, accelerate discovery processes, and provide deeper insights across various scientific domains. By integrating AI with supercomputing, we’re not just enhancing computational efficiency; we’re transforming the very paradigm of scientific exploration. In parallel, we’re excited about the development of “FugakuNEXT,” the successor to Fugaku. This next-generation supercomputer will incorporate advanced technologies, including innovative memory solutions designed to drastically reduce the energy consumption associated with data movement, a critical challenge in scaling supercomputing capabilities. Moreover, our commitment to expanding the frontiers of computability extends to the realm of Quantum-HPC Hybrid computing. This pioneering project aims to merge quantum computing’s unique capabilities with the robust power of traditional high-performance computing, opening new avenues for solving previously intractable problems. Recognizing the importance of accessibility and flexibility in computing resources, we’re also integrating our supercomputing assets with cloud platforms, notably AWS. This strategic move will democratize access to supercomputing power, enabling a broader range of researchers to tackle pressing global challenges with greater agility and scalability. Together, these initiatives represent RIKEN R-CCS’s vision for the future – a future where supercomputing is not just about raw computational power, but about enabling a more profound understanding of the natural world, driving innovation, and contributing solutions to some of the most pressing issues facing humanity today.


Check Programme here

DAY 4 – Thursday 15 February 2024

Andrew Jones – Leader, Future Supercomputing & AI Capabilities, Microsoft.

This talk will explore some predictions about how computing might evolve over the next decade or more, with a particular focus on AI (e.g., generative AI and LLMs) and science use cases, from local scale to the largest scales of supercomputing. The talk will cover both technology and human aspects. Some of these predictions are grounded in exploratory work or actual projects underway, others are based on personal observations and speculations, including from a decade of providing HPC Leadership training.


Jim Ang – Chief Scientist, PNNL. US.

One of the challenges with semiconductor computing investments is the time and cost associated with developing innovative hardware concepts versus developing innovative software concepts. The timelines and costs associated with software development are much lower, and software therefore attracts much more venture capitalist (VC) investment. This is one of the high-impact areas where the capabilities of the U.S. CHIPS Act National Semiconductor Technology Center (NSTC) provide an infrastructure to reduce the time and cost to design and develop prototype computing hardware. These infrastructure capabilities can then change the priorities that VCs consider for semiconductor innovations.


In my presentation, I share a vision of greatly lowered barriers to innovation, where 100x reductions in the time and cost to develop prototype computing hardware designs are feasible. To cross the Valley of Death, we need intermediate proof-of-concept test hardware that can be produced in small volumes by the NSTC’s prototyping enablement capabilities. While these test hardware designs could be developed with a product-grade hardware design infrastructure, a better solution is agile, less expensive design tools that have sufficient fidelity to co-design prototype-grade evaluation hardware for the computing domain and that can be tested with co-designed software. It may then be possible to imagine a test hardware design strategy that leverages a collection of a dozen hardware designs to help inventors explore the design space. This collection of design-space-exploration prototype hardware can be evaluated with supporting software to identify the “best” design to move forward to the next design gate, to candidate VC investment, or to Government Program Sponsor support for NRE and product-grade EDA team investment.


Thuc Hoang – Director, Office of Advanced Simulation and Computing & Institutional Research and Development Programs, National Nuclear Security Administration, Department of Energy, US.

Simon Hammond – Federal Program Manager, Office of Advanced Simulation and Computing & Institutional Research and Development Program, National Nuclear Security Administration, Department of Energy, US.

The Advanced Simulation and Computing (ASC) program, in the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA), is responsible for providing high-performance simulation capabilities and computational resources for the NNSA Stockpile Stewardship Program. In recent years, ASC has experienced major disruptive computing architecture changes, which have caused numerous code performance and portability challenges. Responding to the 2023 National Academies of Sciences, Engineering, and Medicine Post-Exascale report, we will present ASC’s near-term plan to address the challenges and recommendations discussed in the report.
Partnering with industry to conduct hardware/software research and development, field testbeds and prototypes, and procure high-performance computing platforms – all to keep pace with technology changes – will continue, because these activities allow the ASC program to anticipate and prepare for future technological disruptions. Sustained, tightly coordinated collaborations with industry partners, U.S. agencies, and international organizations will enable the optimal use of these technologies to meet the NNSA stockpile stewardship requirements.


Duncan Hall – IMD Strategy & Planning Manager, Ministry for Foreign Affairs & Trade (MFAT), New Zealand.

Prima facie, the SABSA (Sherwood Applied Business Security Architecture) method should be useful for addressing the issues (including identifying asset values, system threats, control measures, investment rationale, and recommendations to decision-makers) discussed in NIST SP 800-223, “High-Performance Computing (HPC) Security: Architecture, Threat Analysis, and Security Posture.”

This presentation outlines how SABSA, an information security planning method developed and practised in industry, can be used to inform the optimal selection of controls to address the most important cybersecurity risks in HPC environments.


Fernanda Foertter – Director of HPC, Voltron Data. US

While the HPC community has been busy pushing flops across the exascale barrier, we’ve done a poor job of building systems that lower data friction. One could even say that our community has accepted locked-up data as a fact of life. Many datasets are either format-locked or vendor-locked, and we spend an uncomfortable amount of time creating glue code to transfer data and make it useful. Not only is it difficult for tools and systems to communicate with each other, but once data does arrive at its destination we still have a lot of work to do to make that data useful in HPC and AI. Luckily, we have the opportunity to unite HPC and AI – if only data standards existed and data systems could communicate over a unified protocol. This talk will explore what a world without data friction could look like if we are willing to band together and agree on data standards.


Pete Beckman – Director, Northwestern University / Argonne Lab Institute for Science and Engineering at Argonne National Laboratory. US.

The world is excited about conversational AI, hallucinations, and deep fakes. It seems we are all AI-curious. While most of our attention has been focused on capable multimodal large language models that include text, image, and audio data, there are scientists exploring the small… the AI that can be squeezed, shrunk, and crammed into ever smaller scientific instruments and devices. As companies continue to spend hundreds of millions of dollars training the largest models, how will scientists explore the tiny AI models that can fit in a microscope, an audio recorder, or a cloud camera? While the largest supercomputers on the planet are exciting, like the All Blacks beating Australia, it is the other end of the computing continuum that might transform how we instrument and study the impacts of climate change on our planet. The National Science Foundation (NSF) has funded several projects to build national cyberinfrastructure, open for all scientists, to explore the AI-enabled computing continuum. The Sage (sagecontinuum.org) infrastructure allows scientists to deploy AI algorithms to the edge (AI@Edge). The infrastructure allows computer scientists to explore AI approaches such as federated learning and self-supervised learning as well as bi-directional interactions between instruments and computation. The Sage cyberinfrastructure is now part of the NSF pilot program building a National AI Research Resource (NAIRR) for scientists across the nation. Sage testbeds have been deployed in California, Montana, Colorado, and Kansas, in the National Ecological Observatory Network, and in urban environments in Illinois and Texas. These resources provide opportunities for scientists (both in computer science and other domains) to explore the computing continuum, from the smallest of AI to the largest. This talk will explore AI in the small connected to AI in the large – the computing continuum for scientific discovery.


Ron Brightwell – Department Manager, Scalable Systems Group, Sandia National Laboratories. US.

Over the last three decades, high-performance computing (HPC) systems have evolved from highly specialized hardware running custom software environments to platforms that are almost entirely composed of commodity components. While some aspects of large-scale HPC systems continue to be enhanced to meet performance and scalability demands, HPC systems have been distinguished by their interconnect technology. The emergence of cloud computing and hyperscalers has led to the deployment of massive data centers containing systems much larger than the fastest HPC systems. But, until recently, these systems were easily differentiated from HPC machines by their use of commodity Ethernet networks. It appears that these two worlds are now converging and may be headed towards a common solution. This talk will describe how interconnect hardware and software technologies for HPC systems have been impacted by cloud computing and offer a perspective on future challenges that will need to be addressed to ensure that interconnect technology continues to meet the requirements of extreme-scale HPC systems.


Bill Magro – Chief Technologist, High Performance Computing, Google. US.

Artificial Intelligence (AI) and cloud computing are poised to transform scientific discovery and engineering innovation. However, this potent combination also has the potential to disrupt the HPC landscape.

Will AI and Cloud ultimately propel HPC to new heights, or will they reshape the industry, displacing traditional HPC solutions?

In this talk, we will examine how the race to Exascale changed the trajectory of both AI and Cloud in perhaps unexpected ways, setting off a series of events that are now making the future of HPC both exciting and uncertain.


Tickets available here

Check Programme here