
Laura Monroe

Bio

Laura is a researcher in resilience and novel computing techniques, especially inexact computing. Her current focus is the design of algorithms and systems that address inexactness. She is also interested in the application of discrete mathematics to the design and understanding of computing systems, especially as the industry moves into the late- or post-Moore’s-Law era.

Laura is a mathematician and received her Ph.D. in the theory of error-correcting codes, working with Dr. Vera Pless. After her degree, she worked at NASA Glenn, then joined Los Alamos National Laboratory in 2000. She led LANL’s Production Visualization project and was project leader for the redesign of the LANL large-scale visualization corridor, encompassing computing systems, networking, virtual reality, and other display systems. She also served on the design teams for the LANL Cielo and Trinity supercomputers.

She now works at LANL’s Ultrascale Systems Research Center, where she originated and leads the Laboratory’s inexact computing project, looking at both probabilistic and approximate computation to support LANL mission activities.

She has published in the areas of probabilistic computing and algorithms, mathematics, resilience, error-correcting codes, virtual reality and visualization. She has received several Defense Program Awards of Excellence, several LANL Distinguished Performance Awards, an R&D 100 award as part of the PixelVizion team, and one of the 2019 NM Women in Technology Awards.

LinkedIn

Ultrascale Systems Research Center at Los Alamos

Women in Tech Awards 2019



Inexact computing: what can you get away with, and how?

Laura Monroe

Senior Research Scientist – Ultrascale Systems Research Center

Los Alamos National Lab (LANL), New Mexico, USA

Wednesday 19 February 2020 – 9:20 am


Abstract

Inexact computing covers both probabilistic and approximate computing. Probabilistic computation is non-deterministic, and the results are hopefully statistically “correct enough”. Approximate computation is deterministic, and produces a result that is hopefully “close enough”.
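
As a rough illustration (not drawn from the talk itself), the sketch below computes the same quantity, π, in both styles: the Monte Carlo estimate is non-deterministic and only statistically “correct enough”, while the truncated series is deterministic and “close enough” up to a fixed truncation error. Python and the function names are assumptions made purely for illustration.

```python
import math
import random

def pi_probabilistic(samples=100_000):
    # Monte Carlo estimate: non-deterministic, statistically "correct enough".
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

def pi_approximate(terms=1_000):
    # Truncated Leibniz series: deterministic, "close enough" to pi.
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(pi_probabilistic())  # varies from run to run, clustered around pi
print(pi_approximate())    # identical every run, off by a bounded truncation error
print(math.pi)
```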

Both are present in the late-CMOS era: probabilism can come from increased fault rates or may be inherent to the processor, and approximation may stem from architectural word-length limits or from numerical methods. Smaller feature sizes make faults more likely, certain emerging processors are inherently probabilistic, and current processors offer a range of precisions.
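
As a minimal sketch of approximation arising purely from word length (assuming NumPy is available; the numbers are illustrative, not from the talk), accumulating the same values in 16-bit and 64-bit floating point gives very different answers:

```python
import numpy as np

# Sum 10,000 copies of 0.1; the exact answer is 1000.
acc64 = np.float64(0.0)
acc16 = np.float16(0.0)
for _ in range(10_000):
    acc64 = np.float64(acc64 + 0.1)  # 64-bit accumulation stays very close to 1000
    acc16 = np.float16(acc16 + 0.1)  # 16-bit accumulation stalls once 0.1 drops below half an ulp

print(acc64, acc16)  # very close to 1000 versus about 256: a large but deterministic error
```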

If you know you aren’t always going to get the right answer, when do you care and why? How do you know that your answer is “good enough”? And what do you do about it if it isn’t? (What does “good enough” even mean?) This requires a change in how we think about computation. Known mathematical error-correction methods may not suffice under these conditions, and an ad hoc approach will not cover the cases likely to emerge, so new mathematical approaches will be essential.
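
One simple, hypothetical notion of “good enough” for a probabilistic computation is to repeat it and require the spread of the answers to fall within a tolerance, retrying or escalating if it does not. The sketch below illustrates that idea only; it is not a method from the talk, and the function names and tolerance are assumptions.

```python
import math
import random
import statistics

def noisy_estimate(samples=10_000):
    # Stand-in for any probabilistic computation (here, a Monte Carlo estimate of pi).
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

def accept(tolerance=1e-2, trials=30):
    # Accept the answer only if the standard error of repeated runs is within tolerance.
    results = [noisy_estimate() for _ in range(trials)]
    mean = statistics.fmean(results)
    stderr = statistics.stdev(results) / math.sqrt(trials)
    return mean, stderr, stderr <= tolerance

mean, stderr, ok = accept()
print(f"estimate {mean:.4f} +/- {stderr:.4f}, good enough: {ok}")
```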

We will discuss the mathematical underpinnings of such approaches, illustrate them with examples, and emphasize the interdisciplinary combination of experimentation, simulation, mathematical theory, and applications that success will require.

SLIDES

VIDEO
