
Abstracts 2020


Final Update: 11 February 2020
DAY 1 – TUESDAY 18 FEBRUARY 2020

AI and HPC Convergence

Trish Damkroger

VP & GM – Data Center / Extreme Computing Group

Intel Corp., San Francisco Bay Area, USA

Opening Keynote – Tuesday 18 February 2020 – 9:20 am


Abstract

The convergence of HPC and AI is enabling powerful ways to tackle previously impossible design, engineering, and scientific challenges – all while driving a paradigm shift in how we design, build, and program the next generation of supercomputing systems. System architectures are evolving to be workflow optimized and powered by a range of heterogeneous compute engines to meet the diverse requirements of HPC and AI applications … in a single computing environment.

In addition, the memory-storage hierarchy is being redefined to feed the growing demands of these new computing engines, while innovations in fabric technologies are further enriching the functionality and system capability. In this talk, Trish Damkroger, Vice President in the Data Platforms Group, will discuss this inflection point in high-performance computing and Intel’s software-first strategy, complemented with innovations across hardware and software to accelerate this convergence.


PNNL’s Data-Model Convergence Initiative – 2020 Update

James A. Ang, Ph.D.
Chief Scientist for Computing, Physical and Computational Sciences Directorate
Pacific Northwest National Laboratory (PNNL), Richland, Washington, USA

Keynote – Tuesday 18 February 2020 – 10:40 am


Abstract

PNNL’s Data-Model Convergence (DMC) Initiative was launched in January 2019. The DMC Initiative is pursuing the integration of high performance computing (HPC) modelling and simulation, data/graph analytics, and domain-aware machine learning computing paradigms on multiple levels.

This five-year initiative is creating the next generation of scientific computing capability through a software and hardware co-design effort at the levels of:

1) heterogeneous workloads,

2) integrated system software stack, and

3) conceptual designs for heterogeneous system-on-chip processors.

This 2020 update will provide an overview of our portfolio of DMC projects in Application Domains, Data Sciences, Software Stack and Hardware Architectures. Computing workflows that use this converged DMC software and hardware architecture support laboratory objectives in accelerating scientific discovery and enabling real-time control of the power grid.


Speed Up Your Parallel Application Without Doing Much

Ruud van der Pas – Distinguished Engineer, Performance Geek

Oracle Linux and Virtualization Engineering organization

Oracle, Inc., Santa Clara, California, USA / Netherlands

Keynote – Tuesday 18 February 2020 – 1:35 pm


Abstract

Surprisingly, many developers ignore the low-hanging fruit when it comes to performance tuning. Admittedly, the word “low” is relative to how tall you are, but as we will demonstrate in this talk, a combination of the right tools and basic insights can deliver significant performance improvements. We will illustrate this using a graph analysis application.
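
The talk itself names no specific fixes, but a classic piece of low-hanging fruit is a parallel reduction that accumulates into adjacent array slots, causing false sharing. The sketch below is an illustration of that general point, not code from the talk:

```cpp
// Illustrative only: a common piece of low-hanging fruit in parallel code.
// Accumulating into sums[thread_id] puts neighbouring threads' accumulators
// on the same cache line (false sharing); a reduction clause avoids this.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 24;
    std::vector<double> data(n, 1.0);

    // Slow pattern (false sharing), shown for contrast:
    //   double sums[64] = {0};
    //   #pragma omp parallel for
    //   for (int i = 0; i < n; ++i) sums[omp_get_thread_num()] += data[i];

    // Fix: let the runtime keep a private accumulator per thread.
    double total = 0.0;
    #pragma omp parallel for reduction(+ : total)
    for (int i = 0; i < n; ++i)
        total += data[i];

    std::printf("total = %f\n", total);
    return 0;
}
```

Compiled with -fopenmp, the reduction version typically scales with core count, while the false-sharing variant often runs no faster than serial code.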


Building an open, safe, accessible AI & HPC ecosystem

Andrew Richards – CEO and co-founder

Codeplay Ltd, Edinburgh, United Kingdom

Tuesday 18 February 2020 – 3:30 pm


Abstract

The world of AI & HPC is dominated by closed, proprietary software models. To get high performance today, systems need accelerators that have high levels of parallelism, but use closed programming models like CUDA. How do we open this up? How do we make these models safe enough to drive a car? How do we get an industry to work together with industry standards? Andrew and Codeplay have been working on these challenges for years. This talk will show the huge progress made today (SYCL, SPIR-V, oneAPI) and where we’re going next.


AI for HPC: Experiences and Opportunities

Anne C. Elster

Director, Heterogeneous and Parallel Computing Lab and Professor of HPC

NTNU, Trondheim, Norway

Keynote – Tuesday 18 February 2020 – 4:30 pm


Abstract

This talk focuses on how AI techniques can be used in the development of HPC environments and tools. As large HPC systems become more and more heterogeneous, adding GPUs and other devices for performance and energy efficiency, writing and optimizing HPC applications for them also becomes more complex. For instance, both CPUs and GPUs have several types of memories and caches that codes need to be optimized for.

We show how AI techniques can help us pick among the tens of thousands of parameter settings one ends up needing to tune to get the best possible performance from a given complex application. Ideas for future opportunities will also be discussed.
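
To make the size of that search space concrete, the toy sketch below (an illustration, not the lab’s tooling) tunes a single parameter, the tile size of a blocked matrix transpose, by exhaustive timing. Real applications expose thousands of interacting knobs like this one, which is where ML-guided search earns its keep:

```cpp
// Toy autotuning sketch: time a blocked transpose for several tile sizes
// and keep the fastest. The best tile depends on the cache hierarchy,
// so the "right" answer differs from machine to machine.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

static void transpose_tiled(const std::vector<double>& in,
                            std::vector<double>& out, int n, int tile) {
    for (int ii = 0; ii < n; ii += tile)
        for (int jj = 0; jj < n; jj += tile)
            for (int i = ii; i < std::min(ii + tile, n); ++i)
                for (int j = jj; j < std::min(jj + tile, n); ++j)
                    out[j * n + i] = in[i * n + j];
}

int main() {
    const int n = 2048;
    std::vector<double> in(n * n, 1.0), out(n * n);

    int best_tile = 0;
    double best_ms = 1e30;
    for (int tile : {8, 16, 32, 64, 128, 256}) {  // the "parameter space"
        auto t0 = std::chrono::steady_clock::now();
        transpose_tiled(in, out, n, tile);
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("tile %4d: %8.2f ms\n", tile, ms);
        if (ms < best_ms) { best_ms = ms; best_tile = tile; }
    }
    std::printf("best tile: %d\n", best_tile);
    return 0;
}
```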


DAY 2 – WEDNESDAY 19 FEBRUARY 2020

Scaling!

Richard O’Keefe

Computer Scientist, PhD in AI

Open Parallel, Dunedin, New Zealand

Wednesday 19 February 2020 – 8:45 am


Abstract

Scaling is important to multicore, HPC, distributed, and IoT systems.

This talk examines the concept of scaling from several perspectives: algorithmic, biological, and social.

It touches on

  • the paradox of unexpected emergent behaviour in a world where scaling laws apply consistently over many orders of magnitude
  • hardware bugs
  • how we can build resilient systems in an environment we cannot fully understand

Inexact computing: what can you get away with, and how?

Laura Monroe

Senior Research Scientist – Ultrascale Systems Research Center

Los Alamos National Lab (LANL), New Mexico, USA

Wednesday 19 February 2020 – 9:20 am


Abstract

Inexact computing covers both probabilistic and approximate computing. Probabilistic computation is non-deterministic, and the results are hopefully statistically “correct enough”. Approximate computation is deterministic, and produces a result that is hopefully “close enough”.

Both of these are present in the late-CMOS era: probabilism can come from increased faults or may be inherent to the processor, and approximation may be due to architectural word length limits or numerical methods. Smaller feature sizes permit more faults, certain emerging processors have inherent probabilism, and current processors provide a range of different precisions.

If you know you aren’t always going to get the right answer, when do you care and why? How do you know that your answer is “good enough”? And what do you do about it if it isn’t? (What does “good enough” even mean?) This is a change in how we think about computation. Known mathematical error-correction methods may not suffice under these conditions, and an ad hoc approach will not cover the cases likely to emerge, so mathematical approaches will be essential.

We will discuss the mathematical underpinnings behind such approaches, illustrate with examples, and emphasize the interdisciplinary approaches that combine experimentation, simulation, mathematical theory and applications that will be needed for success.
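
To make the “good enough” question concrete, here is a minimal sketch of an application-level acceptance test, using reduced precision as the approximation (an illustration of the idea, not an example from the talk):

```cpp
// Sketch of an approximate-computing acceptance test: compute cheaply in
// single precision, compare against a double-precision reference, and let
// an application-specific tolerance decide what "good enough" means.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<float>  xf(n);
    std::vector<double> xd(n);
    for (int i = 0; i < n; ++i) {
        double v = 1.0 / (i + 1);
        xf[i] = static_cast<float>(v);
        xd[i] = v;
    }

    float  approx = 0.0f;   // cheap, inexact accumulation
    double exact  = 0.0;    // reference
    for (int i = 0; i < n; ++i) approx += xf[i] * xf[i];
    for (int i = 0; i < n; ++i) exact  += xd[i] * xd[i];

    const double tolerance = 1e-4;  // "good enough" is application-specific
    double rel_err = std::fabs(approx - exact) / std::fabs(exact);
    std::printf("relative error %.2e -> %s\n", rel_err,
                rel_err <= tolerance ? "accept" : "recompute more precisely");
    return 0;
}
```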


The Pegasus Workflow Management System: Current Applications and Future Directions

Ewa Deelman, Research Director, Science Automation Technologies Division

University of Southern California, Information Sciences Institute, Los Angeles, CA, USA

Keynote – Wednesday 19 February 2020 – 10:40 am


Abstract

The Pegasus Workflow Management System is designed to meet the needs of a wide variety of scientific applications. It automates the execution of complex and large-scale workflow task graphs operating on large amounts of data. Since 2001, Pegasus has been working with a number of applications, such as LIGO, the gravitational-wave physics experiment, to enable them to accomplish their scientific goals. In 2016, Pegasus was used by LIGO to analyze their experimental data, confirming the first-ever direct detection of a gravitational wave. Pegasus also delivers robust automation capabilities to researchers at the Southern California Earthquake Center (SCEC) studying seismic phenomena, to astronomers seeking to understand the structure of the universe, to materials scientists developing new drug delivery methods, and to students seeking to understand human population migration. An example of societal impact is SCEC’s use of Pegasus to generate the world’s first physics-based probabilistic seismic hazard map, which provides insight into why earthquakes in the Los Angeles basin can be so destructive. This information can inform civil engineering practices in the area.

This talk focuses on the current Pegasus capabilities and describes new research directions that will inform future Pegasus development.
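
For readers new to workflow systems, the core of what any workflow manager automates is running a task graph in dependency order; everything else (data staging, fault tolerance, site selection) builds on that. The sketch below is a generic toy in that spirit and deliberately does not use Pegasus’s actual API:

```cpp
// Generic toy workflow executor (not Pegasus's API): run a hypothetical
// four-task graph in dependency order using Kahn's algorithm.
#include <cstdio>
#include <map>
#include <queue>
#include <string>
#include <vector>

int main() {
    // Hypothetical workflow: two independent extract tasks feed a combine
    // task, whose output feeds a final publish task.
    std::map<std::string, std::vector<std::string>> deps = {
        {"extract_a", {}},
        {"extract_b", {}},
        {"combine",   {"extract_a", "extract_b"}},
        {"publish",   {"combine"}},
    };

    std::map<std::string, int> pending;                         // unmet deps
    std::map<std::string, std::vector<std::string>> dependents;
    std::queue<std::string> ready;
    for (const auto& [task, ds] : deps) {
        pending[task] = static_cast<int>(ds.size());
        for (const auto& d : ds) dependents[d].push_back(task);
        if (ds.empty()) ready.push(task);
    }
    while (!ready.empty()) {
        std::string task = ready.front(); ready.pop();
        std::printf("running %s\n", task.c_str());  // a real WMS submits jobs here
        for (const auto& t : dependents[task])
            if (--pending[t] == 0) ready.push(t);
    }
    return 0;
}
```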


Complex Problems Actually Have Complex Solutions: Data and Processing Challenges for New Zealand

Lev Lafayette

HPC SysAdmin

University of Melbourne, Australia

Wednesday 19 February 2020 – 1:45 pm


Abstract

A continuing issue in the field of computing is our capacity to store, transfer, and process increasingly large datasets and increasingly complex problems. This is, of course, a fundamental reason for developments in multicore computing and in various implementations of parallel programming. New Zealand is by no means immune to these changes and faces a number of big-data and complex-problem issues in its own right, especially relating to geography and climate, which by necessity have enormous impacts on the economy. Yet, in parallel, many IT managers express a desire for applications that are feature-rich but easy to use, whilst politicians often engage with these problems under an assumption of stability (the fate of Melomys rubicola is a particularly pertinent case). Unsurprisingly to those coming from an engineering perspective, these desires and assumptions are fraught with problems.

Despite some impressive technical developments over the years to improve performance and clarity, we keep coming back to the fundamental issue that complex problems actually have complex solutions, and avoiding massive failures in information systems and research reproducibility requires infrastructure, quality assurance, and training. Examples for a New Zealand context are provided.


High Performance Computing in (South) Africa?

Werner Janse van Rensburg – Research Manager

Centre for High Performance Computing (CHPC)

Cape Town, South Africa

Wednesday 19 February 2020 – 2:20 pm


Abstract

HPC adoption and provisioning are well embedded across first-world regions. This provides established, tried-and-tested engagement models in which research collaborations, customer-vendor interactions, skills development, ample opportunity for peer engagement, and a common understanding of what it takes to be successful with HPC all come (quite) naturally.

Where does Africa, and in particular South Africa, fit into the global HPC picture? Is there an HPC footprint? Is there momentum established that provides opportunities for HPC development that is possibly unique compared to the tried-and-tested approaches? Should Africa be given a second thought when opportunities for HPC are explored?

In this talk Werner will provide a realistic look at the significant progress made with HPC in South Africa over more than a decade, and at how this momentum has evolved to impact a number of African countries. Practical successes and challenges, sometimes associated with resource-constrained environments, will be highlighted. A strong argument will be made that ample opportunities for HPC exist in Africa, although these opportunities may be different to established norms.


Operating System needs and futures for Connected Devices

Boyd Multerer, Founder & CEO

Kry10 Industries, Wellington, New Zealand

Wednesday 19 February 2020, 3:30 pm


Abstract

The world of Connected Devices (aka IoT) has multiple fundamental differences from general-purpose computers such as PCs, servers, and even phones. Devices that exist in the field, whether driving infrastructure down the block, alone in the ocean, or up in orbit, have needs in security, robustness, flexibility, management, and more that are not the same as those provided by today’s standard Operating Systems.

In this talk, Boyd discusses how Connected Devices differ from general-purpose computers, what their needs are, and (for the first time) outlines a new Operating System that he is developing here in Wellington, New Zealand – built on the seL4 Microkernel.


The formally verified seL4 microkernel – present and future

Gernot Heiser

Scientia Professor and John Lions Chair of Operating Systems at UNSW Sydney

Chief Research Scientist at CSIRO’s Data61, Australia

Keynote – Wednesday 19 February 2020 – 4:15 pm


Abstract

seL4 is the world’s first operating system (OS) kernel with a formal, machine-checked proof of implementation correctness, originally on Arm v6 processors. Since that initial work ten years ago, we have added proofs of security enforcement and timeliness properties and extended verification to the x86 and RISC-V architectures. To date, seL4 is not only the most comprehensively verified OS, but also has a strong performance focus and evolves (with proofs) to address a widening class of real-world use cases.

This talk will provide a brief overview of the present state of seL4 and its verification story, including multicore support. I’ll focus on recent enhancements, in particular advanced mechanisms for supporting mixed-criticality real-time systems. I will also cover on-going work on time protection, a fundamental approach for preventing information leakage through timing channels.


DAY 3 – THURSDAY 20 FEBRUARY 2020

All Tomorrow’s Memories

(with apologies to Lou Reed)

Bruce Jacob – Keystone Professor of Electrical and Computer Engineering

University of Maryland, College Park, MD, USA

Thursday 20 February 2020, 8:45 am (remote)


Abstract

Memory and communication are the primary reasons that our time-to-solution is no better than it currently is … the memory system is slow; the communication overhead is high; and yet a significant amount of research is still focused on increasing processor performance, rather than decreasing (the cost of) data movement. I will discuss recent & near-term memory-system technologies including high-bandwidth DRAMs and nonvolatile main memories, as well as the impact of tomorrow’s memory technologies on tomorrow’s applications and operating systems. Modern multicore and manycore designs exacerbate the problem, but two solutions are on the horizon.
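
A quick way to see the data-movement point on your own machine is a STREAM-style triad: for arrays much larger than cache, the loop’s runtime is set by memory bandwidth, not by the cores. The sketch below is a minimal single-threaded version in the spirit of the STREAM benchmark, not code from the talk:

```cpp
// STREAM-style triad, a[i] = b[i] + s * c[i]. With three 256 MB arrays the
// working set dwarfs any cache, so the reported GB/s reflects the memory
// system; adding cores or FLOPS barely moves it.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1 << 25;  // 32M doubles = 256 MB per array
    std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
    const double s = 3.0;

    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + s * c[i];
    auto t1 = std::chrono::steady_clock::now();

    double secs  = std::chrono::duration<double>(t1 - t0).count();
    double bytes = 3.0 * n * sizeof(double);  // read b, read c, write a
    std::printf("triad: %.1f GB/s (a[0] = %f)\n", bytes / secs / 1e9, a[0]);
    return 0;
}
```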


Towards Dynamic Resource Management in Next Generation HPC Environments

Balazs Gerofi – Research Scientist

System Software Research Team

RIKEN Center for Computational Science (RIKEN-CCS) – Tokyo, Japan

Thursday 20 February 2020 – 9:30 am


Abstract

Workload diversity in high-performance computing (HPC) environments has experienced an explosion in recent years. The increasing prevalence of Big Data processing, in-situ analytics, artificial intelligence (AI) and machine learning (ML) workloads, as well as multi-component workflows is pushing the limits of supercomputing systems that have been primarily designed to serve parallel simulations.

In addition, with the growing complexity of the hardware there is also a growing interest for multi-tenancy and for a more dynamic, cloud-like execution environment. All these trends bring together a large variety of runtime components that do not cooperate well with each other, which in turn can lead to suboptimal performance.

This talk enumerates a number of representative workloads that stress the limitations of the traditional HPC center. We then highlight some of the underlying forces shaping the requirements of next-generation systems and propose a cross-stack coordination layer that aims to resolve these conflicts. Finally, drawing on some of our previous efforts in this space, we demonstrate the benefits of the overall approach.


Addressing Challenges in Data movement and Communication

Samantika Sury – Principal Engineer

Intel Corp., Westford, Massachusetts, USA

Thursday 20 February 2020 – 10:45 am


Abstract

While the last decade of computer architecture has established many novel compute solutions, we find that application performance is often dominated by data movement.

The convergence of HPC, AI, and analytics, together with the emergence of edge computing, has furthered the trend of applications needing to access large amounts of memory quickly, efficiently, and at low energy.

In this talk we demonstrate the performance and power impact of data movement on key parallel applications, and we explore architectural solutions, such as tightly coupled heterogeneity, moving compute to data, and adaptive hardware, to address the performance challenges posed by data movement.


Preparing for Extreme Heterogeneity in High Performance Computing

Jeffrey S. Vetter

Distinguished R&D Staff Member. Leader, Future Technologies Group

Oak Ridge National Laboratory (ORNL), Tennessee, USA

Keynote – Thursday 20 February 2020, 1:45 pm


Abstract

While computing technologies have remained relatively stable for nearly two decades, new architectural features, such as heterogeneous cores, deep memory hierarchies, non-volatile memory (NVM), and near-memory processing, have emerged as possible solutions to address the concerns of energy efficiency and cost.

However, we expect this ‘golden age’ of architectural change to lead to extreme heterogeneity, which will have a major impact on software systems and applications. Software will need to be redesigned to exploit these new capabilities and provide some level of performance portability across these diverse architectures.

In this talk, I will sample these emerging technologies, discuss their architectural and software implications, and describe several new approaches (e.g., domain specific languages, intelligent runtime systems) to address these challenges.


Why aren’t we there yet? The journey to Exascale COTS computing

Duncan Hall – IMD Strategy and Planning Manager

Ministry of Foreign Affairs & Trade, Wellington, New Zealand

Thursday 20 February 2020 – 2:30 pm


Abstract

The Green500 list, published biannually alongside the Top500 list, ranks the energy efficiency of the top 500 or so supercomputers (whose data is made public) by FLOPS per Watt.

I continue to analyse Green500 data to forecast likely trajectories towards Exascale (~10^18 FLOPS) Commercial Off The Shelf (COTS) computing.
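
For a sense of the scale involved (a rough illustration, not figures from the talk): the November 2019 Green500 leader delivered roughly 17 GFLOPS per Watt, and at that efficiency a sustained exaflop would draw about 10^18 / (17 × 10^9) ≈ 59 MW, well above the 20–30 MW envelope commonly quoted as practical for a single system. That gap is why the forecast hinges on efficiency gains rather than on peak FLOPS alone.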


Machine Learning Needs HPC

Barbara Chapman

Director, Computer Science and Mathematics

Brookhaven National Laboratory (BNL), New York, USA

Thursday 20 February 2020 – 3:30 pm


Abstract

Machine Learning, especially Deep Learning, is deployed in increasingly sophisticated scenarios, enhancing traditional scientific computations, reducing the data storage needs of large-scale experiments, and processing data in situ from observations and experiments.

The HPC community is exploring ways to improve the performance of AI without sacrificing accuracy. Can this help in our quest for convergence?


Fugaku — A Centerpiece for the Japanese Society 5.0

Satoshi Matsuoka

Director, Riken Center for Computational Science (R-CCS)

Professor, Tokyo Institute of Technology

Tokyo, Japan

Closing Keynote – Thursday 20 February 2020 – 4:00 pm


Abstract

Fugaku is not only one of the first ‘exascale’ supercomputers of the world, but also is slated to be the centerpiece for rapid realization of the so-called Japanese ‘Society 5.0’ as defined by the Japanese S&T national policy.

Indeed, the computing capacity of Fugaku is massive, almost equaling the aggregate compute capability of all the servers deployed in Japan (approximately 300,000 units, including those in the cloud). At the same time, it is a pinnacle of the Arm ecosystem: software-compatible with the billions of Arm processors sold worldwide, from smartphones to refrigerators, and able to run the standard software stack just as x86 servers do.

As such, Fugaku’s immense power is directly applicable not only to traditional scientific simulation applications, but also to Society 5.0 applications that encompass the convergence of HPC, AI, and Big Data, as well as of Cyber (IDC & Network) and Physical (IoT) space, with immediate societal impact. A series of projects and developments have started at R-CCS and our partners to facilitate such Society 5.0 usage scenarios on Fugaku.


Featured image – Stephan Friedl – Cisco – USA. Multicore World 2012

Photo Credit: Open Parallel Ltd



Multicore World 2017 – Some speakers and participants: Pete Beckman, Victoria Maclennan, Dave Jaggar, Michael Kelly, Nathan DeBardeleben, John Gustafson, Andreas Wicenec, JC Guzman, Balazs Gerofi, Satoshi Matsuoka, Guy Kloss, Tony Hey, Paul McKenney, Piers Harding, Michelle Simmons, Duncan Hall and others.

Check Multicore World 2019 abstracts here

Check Multicore World 2018 abstracts here
