Exascale Scientific Applications: Scalability and Performance Portability

CRC Press · Ebook · 608 pages

About this ebook

From the Foreword:

"The authors of the chapters in this book are the pioneers who will explore the exascale frontier. The path forward will not be easy... These authors, along with their colleagues who will produce these powerful computer systems, will, with dedication and determination, overcome the scalability problem, discover the new algorithms needed to achieve exascale performance for the broad range of applications that they represent, and create the new tools needed to support the development of scalable and portable science and engineering applications. Although the focus is on exascale computers, the benefits will permeate all of science and engineering because the technologies developed for the exascale computers of tomorrow will also power the petascale servers and terascale workstations of tomorrow. These affordable computing capabilities will empower scientists and engineers everywhere."
— Thom H. Dunning, Jr., Pacific Northwest National Laboratory and University of Washington, Seattle, Washington, USA

"This comprehensive summary of applications targeting Exascale at the three DoE labs is a must-read."
— Rio Yokota, Tokyo Institute of Technology, Tokyo, Japan

"Numerical simulation is now a necessity in many fields of science, technology, and industry. The complexity of the simulated systems, coupled with the massive use of data, makes HPC essential to move towards predictive simulations. Advances in computer architecture have so far enabled scientific advances, but at the cost of continually adapting algorithms and applications. The next technological breakthroughs force us to rethink applications with energy consumption in mind. These profound modifications require not only anticipation and sharing but also a paradigm shift in application design, ensuring the sustainability of developments by guaranteeing a degree of independence of the applications from profound changes in the architectures: the shift from optimal performance to performance portability. The challenge of this book is to demonstrate by example the approach one can adopt to develop applications offering performance portability despite profound changes in computing architectures."
— Christophe Calvin, CEA, Fundamental Research Division, Saclay, France

"Three editors, one from each of the High Performance Computing Centers at Lawrence Berkeley, Argonne, and Oak Ridge National Laboratories, have compiled a very useful set of chapters aimed at describing software developments for the next generation exascale computers. Such a book is needed for scientists and engineers to see where the field is going and how they will be able to exploit such architectures for their own work. The book will also benefit students as it provides insights into how to develop software for such computer architectures. Overall, this book fills an important need in showing how to design and implement algorithms for exascale architectures, which are heterogeneous and have unique memory systems. The book discusses issues with developing user codes for these architectures and how to address these issues, including actual coding examples."
— Dr. David A. Dixon, Robert Ramsay Chair, The University of Alabama, Tuscaloosa, Alabama, USA

About the author

Dr. T. P. Straatsma is the Group Leader for Scientific Computing in the National Center for Computational Sciences, the division that houses the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, and an Adjunct Faculty member in the Chemistry Department of the University of Alabama in Tuscaloosa. He earned his Ph.D. in Mathematics and Natural Sciences from the University of Groningen, the Netherlands. After a postdoctoral appointment and a faculty position in the Department of Chemistry at the University of Houston, he moved to Pacific Northwest National Laboratory (PNNL), where he was a co-developer of the NWChem computational chemistry software, established a program in computational biology, and served as group leader for computational biology and bioinformatics. Straatsma served as Director for the Extreme Scale Computing Initiative at PNNL, focusing on developing science capabilities for emerging petascale computing architectures. He was promoted to Laboratory Fellow, the highest scientific rank at the Laboratory.

In 2013 he joined Oak Ridge National Laboratory, where, in addition to being Group Leader for Scientific Computing, he is the Lead for the Center for Accelerated Application Readiness, and Lead for the Applications Working Group in the Institute for Accelerated Data Analytics and Computing, focusing on preparing scientific applications for the next generation pre-exascale and exascale computer architectures.

Straatsma has been a pioneer in the development, efficient implementation, and application of advanced modeling and simulation methods as key scientific tools in the study of chemical and biomolecular systems, complementing analytical theories and experimental studies. His research focuses on the development of computational techniques that provide unique and detailed atomic-level information that is difficult or impossible to obtain by other methods, and that contributes to the understanding of the properties and function of these systems. In particular, his expertise is in the evaluation of thermodynamic properties from large-scale molecular simulations, having been involved since the mid-1980s in the early development of thermodynamic perturbation and thermodynamic integration methodologies. His research interests also include the design of efficient implementations of these methods on modern, complex computer architectures, from the vector processing supercomputers of the 1980s to the massively parallel and accelerated computer systems of today. Since 1995, he has been a core developer of the massively parallel molecular science software suite NWChem, responsible for its molecular dynamics simulation capability. Straatsma has co-authored nearly 100 publications in peer-reviewed journals and conferences, was the recipient of the 1999 R&D 100 Award for the NWChem molecular science software suite, and was recently elected Fellow of the American Association for the Advancement of Science.

Katie B. Antypas is the Data Department Head at the National Energy Research Scientific Computing (NERSC) Center, which includes the Data and Analytics Services Group, Data Science Engagement Group, Storage Systems Group, and Infrastructure Services Group. The Department's mission is to pioneer new capabilities to accelerate large-scale data-intensive science discoveries as the Department of Energy Office of Science workload grows to include more data analysis from experimental and observational facilities such as light sources, telescopes, satellites, genomic sequencers, and particle colliders. Katie is also the Project Manager for the NERSC-8 system procurement, a project to deploy NERSC's next-generation HPC supercomputer in 2016, named Cori, a system based on a Cray interconnect and the Intel Knights Landing manycore processor. The processor features on-package high-bandwidth memory and more than 64 cores per node, each with 4 hardware threads. These technologies offer applications great performance potential, but will require users to make changes to applications in order to take advantage of multi-level memory and a large number of hardware threads. To address this concern, Katie and the NERSC-8 team launched the NERSC Exascale Science Applications Program (NESAP), an initiative to prepare approximately 20 application teams for the Knights Landing architecture through close partnerships with vendors, science application experts, and performance analysts.

Katie is an expert in parallel I/O application performance, and for the past 6 years has given a parallel-I/O tutorial at the SC conference. She also has expertise in parallel application performance, HPC architectures, HPC user support, and Office of Science user requirements. Katie is also a PI on a new ASCR Research Project, "Science Search: Automated MetaData Using Machine Learning". Before coming to NERSC, Katie worked at the ASC Flash Center at the University of Chicago supporting the FLASH code, a highly scalable, parallel, adaptive mesh refinement astrophysics application, for which she wrote the parallel I/O modules in HDF5 and Parallel-NetCDF. She has an M.S. in Computer Science from the University of Chicago and a bachelor's degree in Physics from Wellesley College.

Timothy J. Williams is Deputy Director of Science at the Argonne Leadership Computing Facility (ALCF), at Argonne National Laboratory. He works on the Catalyst team, a group of computational scientists who partner with the large-scale projects using ALCF supercomputers. Tim manages the Early Science Program (ESP). The goal of the ESP is preparing a set of scientific applications for early, pre-production use of next-generation computers such as ALCF's most recent Cray-Intel system based on second-generation Xeon Phi processors, Theta, and the ALCF's forthcoming pre-exascale system, Aurora, based on third-generation Xeon Phi. Tim received his BS in Physics and Mathematics from Carnegie Mellon University in 1982; he received his PhD in Physics in 1988 from the College of William and Mary, focusing on the numerical study of a statistical turbulence theory using Cray vector supercomputers. Since 1989, he has specialized in the application of large-scale parallel computation to various scientific domains, including particle-in-cell plasma simulation for magnetic fusion, contaminant transport in groundwater flows, global ocean modeling, and multimaterial hydrodynamics. He spent eleven years in research at Lawrence Livermore National Laboratory and Los Alamos National Laboratory. In the early 1990s, Tim was part of the pioneering Massively Parallel Computing Initiative at LLNL, working on plasma PIC simulations and dynamic alternating direction implicit (ADI) solver implementations on the BBN TC2000 computer. In the late 1990s, he worked at Los Alamos' Advanced Computing Laboratory with a team of scientists developing the POOMA (Parallel Object Oriented Methods and Applications) framework, a C++ class library encapsulating efficient parallel execution beneath high-level data-parallel interfaces designed for scientific computing.
Tim then spent nine years as a quantitative software developer for the financial industry, at Morgan Stanley in New York focusing on fixed-income securities and derivatives, and at Citadel in Chicago focusing most recently on detailed valuation of subprime mortgage-backed securities. Tim returned to computational science at Argonne in 2009.
